Question type: Reading comprehension · Category: Frequently tested · Difficulty: Medium
The High School Affiliated to Beijing Normal University, 2018–2019 Academic Year, Senior Two First Semester English Final Exam
Let us all raise a glass to AlphaGo and the advance of artificial intelligence. AlphaGo, DeepMind's Go-playing AI, just defeated the best Go-playing human, Lee Sedol. But as we drink to its success, we should also begin trying to understand what it means for the future.
The number of possible moves in a game of Go is so huge that in order to win against a player like Lee, AlphaGo was designed to adopt a human-like style of gameplay by using a relatively recent development: deep learning. Deep learning uses large data sets, "machine learning" algorithms and deep neural networks to teach the AI how to perform a particular set of tasks. Rather than programming complex Go rules and strategies into AlphaGo, DeepMind designers taught AlphaGo to play the game by feeding it data based on typical Go moves. Then, AlphaGo played against itself, tirelessly learning from its own mistakes and improving its gameplay over time. The results speak for themselves.
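The idea of a program improving by playing against itself can be sketched in miniature. The toy below is an illustrative assumption, not AlphaGo's actual algorithm (which combines deep neural networks with tree search): it uses simple tabular Q-learning via self-play on the tiny game of Nim, where two players alternately take 1 or 2 sticks and whoever takes the last stick wins. All names (`train_self_play`, `best_move`) and parameters are invented for this sketch.

```python
import random

def train_self_play(start_pile=10, episodes=30000, alpha=0.1, eps=0.2, seed=0):
    """Learn Nim purely by self-play: both 'players' share and update
    the same value table, just as the text describes AlphaGo playing
    against itself and learning from its own mistakes."""
    rng = random.Random(seed)
    Q = {}  # Q[(pile, action)] -> estimated value for the player to move

    def q(p, a):
        return Q.get((p, a), 0.0)

    for _ in range(episodes):
        pile = start_pile
        history = []  # (pile, action) pairs, alternating between the two sides
        while pile > 0:
            actions = [a for a in (1, 2) if a <= pile]
            # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: q(pile, x))
            history.append((pile, a))
            pile -= a
        # The player who took the last stick wins; walk the game backwards,
        # flipping the reward sign each ply (zero-sum game, alternating movers).
        reward = 1.0
        for (p, a) in reversed(history):
            Q[(p, a)] = q(p, a) + alpha * (reward - q(p, a))
            reward = -reward
    return Q

def best_move(Q, pile):
    """Greedy move from the learned table."""
    actions = [a for a in (1, 2) if a <= pile]
    return max(actions, key=lambda a: Q.get((pile, a), 0.0))
```

After training, the shared table encodes the winning strategy (leave the opponent a multiple of 3) even though no Nim rules or strategies were ever programmed in, mirroring the passage's point that the designers fed the system experience rather than explicit rules.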
Deep learning represents a shift in the relationship humans have with their technological creations. It results in AI that displays surprising and unpredictable behaviour. Commenting after his first loss, Lee described being shocked by an unconventional move he claimed no human would ever have made. Demis Hassabis, one of DeepMind's founders, echoed this comment: "We're very pleased that AlphaGo played some quite surprising and beautiful moves."
Unpredictability and surprises are—or can be—a good thing. They can indicate that a system is working well, perhaps better than the humans that came before it. Such is the case with AlphaGo. However, unpredictability also indicates a loss of human control. That Hassabis is surprised at his creation's behaviour suggests a lack of control in the design. And though some loss of control might be fine in the context of a game such as Go, it raises urgent questions elsewhere.
How much and what kind of control should we give up to AI machines? How should we design appropriate human control into AI that requires us to give up some of that very control? Is there some AI that we should just not develop if it means any loss of human control? How much of a say should corporations, governments, experts or citizens have in these matters? These important questions, and many others like them, have emerged in response, but remain unanswered. They require human, not human-like, solutions.
So as we drink to the milestone in AI, let's also drink to the understanding that the time to answer deeply human questions about deep learning and AI is now.