1. What is deep learning? What can deep learning do that traditional machine-learning methods cannot?
2. List and briefly explain the different learning paradigms/methods in AI.
3. What is representation learning, and how does it relate to machine learning and deep learning?
4. List and briefly describe the most commonly used ANN activation functions.
5. What is MLP, and how does it work? Explain the function of summation and activation weights in MLP-type ANN.
6. Cognitive computing has become a popular term to define and characterize the extent of the ability of machines/computers to show "intelligent" behavior. Thanks to IBM Watson and its success on Jeopardy!, cognitive computing and cognitive analytics are now part of many real-world intelligent systems. In this exercise, identify at least three application cases where cognitive computing was used to solve complex real-world problems. Summarize your findings in a professionally organized report.

NOTE: Be sure to include an APA cover page and at least two APA-formatted references (with APA in-text citations). The paper should be 1-2 pages in length.

Deep learning is a subfield of machine learning that is based on artificial neural networks (ANNs), loosely inspired by the way the human brain processes information. It is known for its ability to automatically learn complex patterns and representations from large amounts of data, without requiring manually engineered features.

One of the key advantages of deep learning over traditional machine learning methods is its ability to handle high-dimensional data. Deep neural networks with multiple layers of neurons can extract hierarchical representations from the data, with each successive layer building more abstract features from the layer below. This is particularly useful in tasks such as image and speech recognition, where the input data has a high level of complexity and variability.

Traditional machine learning methods typically rely on handcrafted features that need to be explicitly defined and designed by domain experts. This can be time-consuming and may not always capture the full complexity of the data. In contrast, deep learning algorithms can automatically learn relevant features and representations directly from the raw data, reducing the dependency on manual feature engineering.

Different learning paradigms and methods exist in the field of artificial intelligence (AI). These include supervised learning, unsupervised learning, reinforcement learning, and semi-supervised learning. Supervised learning involves training the model on labeled data, where each input has a corresponding desired output. Unsupervised learning, on the other hand, deals with unlabeled data and aims to discover hidden patterns or structures in the data. Reinforcement learning focuses on training an agent to interact with an environment and learn optimal actions based on rewards and punishments. Semi-supervised learning is a combination of supervised and unsupervised learning, where a small portion of the data is labeled and the rest is unlabeled.
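To make the distinction concrete, the sketch below contrasts supervised, unsupervised, and semi-supervised learning on a small synthetic dataset. It assumes scikit-learn and NumPy are available; reinforcement learning is omitted because it requires an interactive environment rather than a fixed dataset.

```python
# A minimal sketch of three learning paradigms on toy data (assumes scikit-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression   # supervised
from sklearn.cluster import KMeans                    # unsupervised
from sklearn.semi_supervised import LabelPropagation  # semi-supervised

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Supervised: every example comes with a label.
clf = LogisticRegression().fit(X, y)

# Unsupervised: labels are ignored; the model looks for structure (here, 2 clusters).
clusters = KMeans(n_clusters=2, random_state=0).fit_predict(X)

# Semi-supervised: only a few labels are known; the rest are marked -1 (unlabeled).
y_partial = np.full_like(y, -1)
y_partial[:20] = y[:20]
semi = LabelPropagation().fit(X, y_partial)
```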

Representation learning is a fundamental concept in machine learning and deep learning. It refers to the process of learning useful representations or features from raw data, which can then be used for various downstream tasks such as classification or regression. Representation learning is closely related to the concept of feature extraction, where the goal is to transform the input data into a more compact and informative representation. Deep learning, with its ability to automatically learn hierarchical representations, is particularly effective in representation learning.
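As a minimal illustration of this idea, the sketch below (assuming scikit-learn) uses PCA as a simple, linear stand-in for a representation learner: raw pixel features are transformed into a compact representation that then feeds a downstream classification task. A deep autoencoder or the hidden layers of a trained network would play the same role nonlinearly.

```python
# Representation learning sketch: learn a compact representation, then reuse it downstream.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)          # 64 raw pixel features per image

pca = PCA(n_components=16).fit(X)            # learn a 16-dimensional representation
X_rep = pca.transform(X)                     # map raw pixels into that representation

# The learned representation is used for a downstream task (classification here).
clf = LogisticRegression(max_iter=1000).fit(X_rep, y)
print(clf.score(X_rep, y))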

There are several commonly used activation functions in artificial neural networks, including the sigmoid function, the hyperbolic tangent function, the rectified linear unit (ReLU), and the softmax function. The sigmoid function is a smooth S-shaped function that maps the input to a value between 0 and 1 and is commonly used for binary classification outputs. The hyperbolic tangent function is similar to the sigmoid function but maps the input to a value between -1 and 1; because its output is centered around zero, it is often preferred in hidden layers. The ReLU function is a non-linear activation function that outputs the input if it is positive and 0 otherwise, which helps mitigate the vanishing-gradient problem and speeds up training of deep networks. The softmax function is typically used in the output layer of a neural network to produce a probability distribution over multiple classes.
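The sketch below gives minimal NumPy implementations of these four functions; the function names and the example input are illustrative only.

```python
# Minimal NumPy implementations of the four activation functions described above.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))       # squashes input into (0, 1)

def tanh(x):
    return np.tanh(x)                      # squashes input into (-1, 1)

def relu(x):
    return np.maximum(0.0, x)              # passes positives, zeroes out negatives

def softmax(x):
    e = np.exp(x - np.max(x))              # subtract max for numerical stability
    return e / e.sum()                     # outputs sum to 1: a probability distribution

z = np.array([-2.0, 0.0, 3.0])
print(sigmoid(z), tanh(z), relu(z), softmax(z))
```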

Multilayer Perceptron (MLP) is a type of artificial neural network that consists of multiple layers of neurons, including an input layer, one or more hidden layers, and an output layer. The input layer receives the input data, which is passed through the hidden layers and finally produces the output through the output layer. Each neuron in the network is connected to neurons in the previous and next layers through weighted connections. At each neuron, the summation function computes a weighted sum of the incoming inputs using these connection weights (plus a bias term), and the activation function is then applied to that sum, introducing the nonlinearity that allows the network to model complex relationships.
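The following sketch walks one input vector through a single hidden layer and an output layer, separating the summation step from the activation step; the weights and inputs are arbitrary illustrative values.

```python
# A minimal NumPy sketch of one MLP forward pass (one hidden layer, illustrative values).
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

x = np.array([0.5, -1.2, 3.0])                  # input layer (3 features)

W1 = np.random.randn(3, 4); b1 = np.zeros(4)    # input -> hidden weights and biases
W2 = np.random.randn(4, 2); b2 = np.zeros(2)    # hidden -> output weights and biases

# Summation: each hidden neuron computes a weighted sum of its inputs plus a bias.
z1 = x @ W1 + b1
# Activation: a nonlinear function is applied to that sum.
h = relu(z1)

# The same two steps repeat at the output layer; softmax yields class probabilities.
z2 = h @ W2 + b2
probs = np.exp(z2 - z2.max()) / np.exp(z2 - z2.max()).sum()
print(probs)
```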

Cognitive computing is an area of AI that aims to simulate human-like intelligence and behavior in machines. It involves technologies such as natural language processing, machine learning, and pattern recognition to enable machines to understand and interact with humans in a more natural and intelligent way. IBM Watson is a well-known example of cognitive computing, as it has been used in various domains such as healthcare, finance, and customer service. Some application cases where cognitive computing has been applied successfully include medical diagnosis, fraud detection, and personalized recommendation systems. These systems leverage the capabilities of cognitive computing to analyze large amounts of data, make informed decisions, and provide intelligent recommendations or solutions.
