Summary:
- traditional methods (SVM/kNN/...) vs. the standard neural network
- convolutional neural network (images)
- recurrent neural network (sequences)
- why is deep learning taking off?
- ReLU function (Rectified Linear Unit)
Questions I answered wrong:
4. Images for cat recognition count as "unstructured" data. I answered "structured" because an image is stored in the computer as a structured array of RGB pixel values, but in this course's terminology image, audio, and text data are all "unstructured".
8. "RNNs represent the recurrent process of Idea -> Code -> Experiment -> Idea -> ..." is not a reason to use an RNN. RNNs are a model type; the iterative development process is a separate concept.
1. Question 1
What does the analogy “AI is the new electricity” refer to?
AI runs on computers and is thus powered by electricity, but it is letting computers do things not possible before.
Similar to electricity starting about 100 years ago, AI is transforming multiple industries. Correct
Yes. AI is transforming many fields from the car industry to agriculture to supply-chain...
Through the “smart grid”, AI is delivering a new wave of electricity.
AI is powering personal devices in our homes and offices, similar to electricity.
2. Question 2
Which of these are reasons for Deep Learning recently taking off? (Check the three options that apply.)
Neural Networks are a brand new field. Un-selected is correct
We have access to a lot more computational power. Correct
Yes! The development of hardware, perhaps especially GPU computing, has significantly improved deep learning algorithms' performance.
Deep learning has resulted in significant improvements in important applications such as online advertising, speech recognition, and image recognition. Correct
These were all examples discussed in lecture 3.
We have access to a lot more data. Correct
Yes! The digitalization of our society has played a huge role in this.
3. Question 3
Recall this diagram of iterating over different ML ideas. Which of the statements below are true? (Check all that apply.)
[diagram omitted: iterating over ML ideas (Idea -> Code -> Experiment loop)]
Being able to try out ideas quickly allows deep learning engineers to iterate more quickly. Correct
Yes, as discussed in Lecture 4.
Faster computation can help speed up how long a team takes to iterate to a good idea. Correct
Yes, as discussed in Lecture 4.
It is faster to train on a big dataset than a small dataset. Un-selected is correct
Recent progress in deep learning algorithms has allowed us to train good models faster (even without changing the CPU/GPU hardware). Correct
Yes. For example, we discussed how switching from sigmoid to ReLU activation functions allows faster training.
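As a concrete illustration of that point (my own sketch, not from the course code), the NumPy snippet below compares the two gradients: sigmoid's gradient shrinks toward zero for large |z|, which slows gradient descent, while ReLU's gradient stays at 1 for any positive input.

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation: squashes z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    """Derivative of sigmoid; approaches 0 for large |z| (vanishing gradient)."""
    s = sigmoid(z)
    return s * (1.0 - s)

def relu(z):
    """ReLU activation: max(0, z)."""
    return np.maximum(0.0, z)

def relu_grad(z):
    """Derivative of ReLU: 1 for z > 0, 0 otherwise (non-saturating for positive inputs)."""
    return (z > 0).astype(float)

z = np.array([-10.0, -1.0, 0.5, 1.0, 10.0])
print("sigmoid grad:", sigmoid_grad(z))  # tiny values at z = -10 and z = 10
print("relu grad:   ", relu_grad(z))     # stays 1 wherever z > 0
```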
4. Question 4
When an experienced deep learning engineer works on a new problem, they can usually use insight from previous problems to train a good model on the first try, without needing to iterate multiple times through different models. True/False?
True
False Correct
Yes. Finding the characteristics of a model is key to having good performance. Although experience can help, it takes multiple iterations to build a good model.
5. Question 5
Which one of these plots represents a ReLU activation function?
Figure 1:
[activation function plot omitted]
Figure 2:
[activation function plot omitted]
Figure 3:
[plot omitted: the ReLU activation function]
Correct
Correct! This is the ReLU activation function, the most commonly used activation function in neural networks.
Figure 4:
[activation function plot omitted]
6. Question 6
Images for cat recognition is an example of “structured” data, because it is represented as a structured array in a computer. True/False?
True This should not be selected
No. Images for cat recognition is an example of “unstructured” data.
False
7. Question 7
A demographic dataset with statistics on different cities' population, GDP per capita, economic growth is an example of “unstructured” data because it contains data coming from different sources. True/False?
True
False Correct
A demographic dataset with statistics on different cities' population, GDP per capita, and economic growth is an example of “structured” data, as opposed to image, audio, or text datasets.
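To make the structured vs. unstructured distinction from Questions 6 and 7 concrete, here is a small sketch (my own illustration with made-up values, not from the quiz): structured data is tabular and every column has a well-defined meaning, whereas an image is just an array of raw pixel intensities.

```python
import numpy as np

# Structured data: a table where every column has a defined meaning.
# (Hypothetical demographic values, for illustration only.)
demographics = [
    {"city": "A", "population": 1_200_000, "gdp_per_capita": 45_000, "growth": 0.021},
    {"city": "B", "population": 350_000, "gdp_per_capita": 38_000, "growth": 0.015},
]

# Unstructured data: a toy 64x64 RGB image is just raw pixel intensities;
# a single pixel has no standalone meaning the way a table column does.
cat_image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

print(demographics[0]["gdp_per_capita"])  # 45000 -- a meaningful field
print(cat_image.shape)                    # (64, 64, 3) -- just pixels
```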
8. Question 8
Why is an RNN (Recurrent Neural Network) used for machine translation, say translating English to French? (Check all that apply.)
It can be trained as a supervised learning problem. Correct
Yes. We can train it on many pairs of sentences x (English) and y (French).
It is strictly more powerful than a Convolutional Neural Network (CNN). Un-selected is correct
It is applicable when the input/output is a sequence (e.g., a sequence of words). Correct
Yes. An RNN can map from a sequence of English words to a sequence of French words.
RNNs represent the recurrent process of Idea->Code->Experiment->Idea->.... This should not be selected
No. RNNs are a model type. The iterative process of developing DL systems is a completely separate concept.
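A minimal sketch of why an RNN suits sequence input/output (my own NumPy illustration, not the course implementation): the same cell and weights are applied at every time step, carrying a hidden state forward, so the model can consume a sequence of word vectors of arbitrary length.

```python
import numpy as np

def rnn_forward(x_seq, Wx, Wh, b, h0):
    """Run a simple RNN cell over a sequence of input vectors.

    x_seq: list of input vectors (e.g., word embeddings), one per time step.
    The same weights (Wx, Wh, b) are reused at every step, which is what
    lets the model handle sequences of arbitrary length.
    """
    h = h0
    hidden_states = []
    for x_t in x_seq:
        h = np.tanh(Wx @ x_t + Wh @ h + b)  # new state from current input + previous state
        hidden_states.append(h)
    return hidden_states

# Toy dimensions: 4-dim word embeddings, 3-dim hidden state, 5-word sentence.
rng = np.random.default_rng(0)
x_seq = [rng.normal(size=4) for _ in range(5)]
Wx = rng.normal(size=(3, 4))
Wh = rng.normal(size=(3, 3))
b = np.zeros(3)
h0 = np.zeros(3)

states = rnn_forward(x_seq, Wx, Wh, b, h0)
print(len(states), states[-1].shape)  # 5 hidden states, each of shape (3,)
```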
9. Question 9
In this diagram which we hand-drew in lecture, what do the horizontal axis (x-axis) and vertical axis (y-axis) represent?
[hand-drawn diagram from lecture omitted]
x-axis is the amount of data; y-axis (vertical axis) is the performance of the algorithm. Correct
x-axis is the performance of the algorithm; y-axis (vertical axis) is the amount of data.
x-axis is the amount of data; y-axis is the size of the model you train.
x-axis is the input to the algorithm; y-axis is outputs.
10. Question 10
Assuming the trends described in the previous question's figure are accurate (and hoping you got the axis labels right), which of the following are true? (Check all that apply.)
Increasing the training set size generally does not hurt an algorithm’s performance, and it may help significantly. Correct
Yes. Bringing more data to a model is almost always beneficial.
Decreasing the size of a neural network generally does not hurt an algorithm’s performance, and it may help significantly. Un-selected is correct
Decreasing the training set size generally does not hurt an algorithm’s performance, and it may help significantly. Un-selected is correct
Increasing the size of a neural network generally does not hurt an algorithm’s performance, and it may help significantly. Correct
Yes. According to the trends in the figure above, big networks usually perform better than small networks.