
5 things I learned in a year at the MIT Artificial Intelligence Research Lab


With a little practice, science can profoundly affect a person's worldview.

Author | Liu Bingyi, Ailleurs

Editor | Chen Caixian

Mike Ferguson works as a Research Software Engineer/ML Engineer in MIT's Department of Brain and Cognitive Sciences (MIT BCS), where he focuses on Brain-Score, a tool for measuring how brain-like AI systems are. He graduated from the University of Virginia in spring 2021 with bachelor's degrees in computer science and applied mathematics, and in cognitive science and philosophy.


Caption: Mike Ferguson

In this article, Mike shares 5 things he learned during his year at the MIT AI lab, including some of his thoughts on life, success, and knowledge, which he hopes will be interesting or useful to you.

1

Admit your blind spots and question everything

Mike, who had just graduated from UVA with a major in computer science and cognitive science and a minor in philosophy and mathematics, felt good about himself before starting his career at MIT. However, when he first attended the weekly lab meeting, he was dumbfounded: he understood at most 10-20% of the discussion. For the next few weeks he questioned himself: Was his IQ too low for him to deserve a place at MIT? Why did he understand so little?

Mike noticed that the smartest people in the lab were constantly asking questions. In the first week alone, he met 5 or 6 people who had been studying the intersection of AI and neuroscience for longer than he had been alive. Even after decades of intensely focused research, at the absolute peak of the field, they were still constantly asking questions, solving problems, and testing hypotheses.

He came to understand that his purpose at MIT was precisely to keep working through what he didn't understand. He gave up pretending, freely admitting when he was not familiar with the work currently in progress.

Never stop asking questions; every question is an opportunity to close a gap in understanding and improve your knowledge. It is precisely by fully understanding what you already know and probing its opposite that you expand the boundaries of your knowledge. Always agreeing with a colleague's point of view, and always wanting others to know how smart you are, is a sign of insecurity. In a safe environment without cognitive conflict, you are merely a frog in slowly warming water.

Don't agonize over why you are asking questions; just never stop asking them. Don't you feel awe when you contemplate the mysteries behind eternity, life, and reality? This is the miracle of the human mind: using its structures, concepts, and formulas as tools to explain what humans see, feel, and touch.

Now he has trained himself so that, when asked a question he cannot answer, he responds readily with "I'm not sure, I'll have to look into it" or "Good point, I need to run more experiments to confirm."

2

Sometimes it's better to be blunt

"Don't sugarcoat things; it only hinders scientific progress. We don't have time for that."

When he learned he had gotten the position at the MIT lab, Mike thought of a professor he had met during his undergraduate years who had received both his bachelor's and doctoral degrees from MIT's EECS. He ran to the professor with a bunch of questions: What is MIT like? How does its culture compare with UVA's? What's the deal with the odd-looking mascot, Tim the Beaver? Why are prices in Boston so high?

The professor gave him plenty of useful advice, but what he particularly remembered was a warning: "At MIT, bluntness is everywhere. If you have a stupid idea, people will tell you. If you're not good at what you're doing, people will tell you that too. If your hypothesis is garbage, someone will point it out to you, even in a room full of people."

Mike took note, and experienced it for himself at his first lab meeting a few months later: some of his ideas were called immature, and when he made a technical mistake he was called out directly. This happens to everyone at MIT, whether you have published 13 papers in Science or none. It is simply part of the culture you will encounter there. In fact, if an audience keeps interjecting and asking questions, it is seen as a sign of respect: it means they're interested! A presentation that is never interrupted may well have been a boring one.

The pursuit of knowledge and the advancement of the frontiers of science are sacred at MIT, and the ability to get frank, objective feedback is especially prized. At MIT, the time and place for straightforward communication is anytime, anywhere, and you can focus on your work without taking the criticism personally: it is criticism of your work, not of you. Over the past few months, Mike has actively sought out this blunt, objective feedback; in terms of time invested versus domain knowledge gained, it offers the best value for money.

Since we are trying to learn as much as we can, why not embrace the direct feedback that criticism provides?

3

Apprentice mentality

"Repeated failure will toughen your mind and show you with absolute clarity how things must be done."

Mike has kept up a book-a-week challenge for more than 3 years. In the past four years, he has read more than 170 books on artificial intelligence, philosophy, and the meaning of being human.

What he learned from these books is that to master something, to truly understand a field and make an impact, one has to go through all the stages of development. After completing your formal education, you enter the "apprenticeship" stage, where you must learn how things are done (both the explicit and the implicit). This stage lasts from 3 to 10 years. Then comes the creative phase, where you expand and develop your creativity and independence. Finally, you reach the mastery stage. Mastering a discipline or field is an investment: it lets you reach your full potential in a meaningful way, it pays off in future happiness and achievement, and it is a way to avoid getting stuck in a dead end or growing unhappy as you age.

In his deep dive into artificial intelligence and neuroscience, Mike feels he is in the apprenticeship phase, or, in the words of his favorite author Robert Greene, "embracing the ideal apprenticeship": ask questions, eagerly seek knowledge, and never feel superior while learning. Anything relevant to one's own field, even the seemingly unrelated, is worth learning.

4

Become an autonomous worker

Whether AI can experience emotions is a highly controversial topic. Mike has written many articles about it that annoyed his labmates, and he is still nowhere near an answer. "All I know is that we are human beings with thousands of years of evolutionary heritage. Our emotions and thoughts, such as happiness, sadness, hope, victory, and defeat, are unique. They are exactly what makes us human, and they will be hard to replicate in artificial intelligence anytime soon."

Our brains can malfunction in more ways than they can function properly: dopamine levels can get out of control, lesions can develop, signals can be lost or improperly rerouted... The list of possible faults is almost endless. We all make mistakes; it is a perfectly ordinary thing. And all of our emotions are valuable, an important part of what distinguishes people from brain-like systems and machines.

On this beautiful planet, we have always been sentient beings, thinking animals, and that in itself is a great privilege and adventure.

Think of all the stories history has forgotten: themes of survival, love, suffering, and adversity have echoed through the centuries, and each unique mind leaves its own lasting imprint on time and space. So no matter what else happens in your life, for better or worse, no matter the dull daily tasks of survival, no matter your personal gains and losses: just remember that being a conscious, functioning person is a great feat.

5

Science is a way of thinking, not a body of knowledge

The recent rise of an "anti-science" ethos across the United States is in many ways deeply disturbing. Carl Sagan predicted this phenomenon with surprising accuracy back in 1996:

I have a foreboding of an America in my children's or grandchildren's time: when the United States is a service and information economy, and nearly all the key manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, our critical faculties in decline, credulous claims of pseudoscience and superstition run rampant, and we slide, almost without noticing, back into superstition and darkness...

—Carl Sagan, The Demon-Haunted World: Science as a Candle in the Dark

Skepticism about the scientific enterprise itself seems to be gaining popularity. How can this "anti-science wind" be countered? Mike offers some insights based on what he has observed so far at MIT.

First, as the first section above said: question everything. Nothing is exempt from scrutiny and reasonable doubt. When you see an article, first look at who wrote it, what their previous work is, and whether there is money behind it. Before drawing conclusions, cross-reference the sources for confirmation. Ask why people are making an argument and what they stand to gain. If an argument comes from a biased history, you are likely seeing only one side of the matter.

Second, analyze arguments and look for common logical errors, such as personal attacks, non sequiturs, and selection and confirmation bias (selection bias being the most important, because its far-reaching effects are hard to detect). Follow the author's line of reasoning, making sure the argument is valid and sound (the conclusions follow from the premises). Be wary of false insinuations, unfounded claims, and manipulated charts and data. Seek evidence for every assertion: what can be asserted without evidence can be dismissed without evidence.

Finally, recognize that people make mistakes. Data are often incomplete or biased, and new evidence may upend the original argument. Opinions can and should change. The mature approach is to let old ideas go when confronted with new facts, and to admit any mistakes made.


Image credit: Greg Rakozy

Mike hopes these suggestions help us find our way in a seemingly "post-truth" world, and that we learn to dig deeper into arguments and analyze how conclusions are reached. Science is a way of thinking, a delicate balance between open-mindedness and skepticism. The point is that with a little practice, science can profoundly affect a person's worldview.

Reference Links:

https://towardsdatascience.com/5-things-i-have-learned-working-in-an-mit-ai-research-lab-for-a-year-a65b4fcaef31
