
Artificial intelligence has finally become trustworthy enough to watch over everything from nuclear bombs to premature babies.

Given the choice between a flesh-and-blood doctor and an artificial intelligence system for diagnosing diseases, Pedro Domingos is willing to stake his life on AI. "I'd trust the machine more than I'd trust the doctor," says Domingos, a computer scientist at the University of Washington, Seattle. Considering the bad rap AI usually receives - overhyped, underwhelming - such strong statements in its support are rare indeed.

Back in the 1960s, AI systems started to show great promise for replicating key aspects of the human mind. Scientists began by using mathematical logic both to represent knowledge about the real world and to reason about it, but logic soon turned out to be an AI straitjacket. While it could be productive in ways similar to the human mind, it was inherently unsuited to dealing with uncertainty.

Yet after spending so long shrouded in a self-inflicted winter of discontent, the much-maligned field of AI is in bloom again, and Domingos is not the only one with fresh confidence in it. Researchers hoping to detect illness in babies, translate spoken words into text and even sniff out rogue nuclear explosions are proving that sophisticated computer systems can exhibit the nascent abilities which sparked interest in AI in the first place: the ability to reason like humans, even in a noisy and chaotic world.

Lying close to the heart of AI's revival is a technique called probabilistic programming, which combines the logical underpinnings of the old AI with the power of statistics and probability. "It's a natural unification of two of the most powerful theories that have been developed to understand the world and reason about it," says Stuart Russell, a pioneer of modern AI at the University of California, Berkeley. This powerful combination is finally starting to disperse the fog of the long AI winter. "It's definitely spring," says cognitive scientist Josh Tenenbaum at the Massachusetts Institute of Technology.

The term "artificial intelligence" was coined in 1956 by John McCarthy of MIT. At the time, he advocated the use of logic for developing computer systems capable of reasoning. This approach matured with the use of so-called first-order logic, in which knowledge about the real world is modelled using formal mathematical symbols and notation. It was designed for a world of objects and relations between objects, and it could be used to reason about those relations and arrive at useful conclusions. For example, if person X has disease Y, which is highly infectious, and X came into close contact with person Z, logic lets one infer that Z has disease Y.
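In first-order notation, that inference rule might be written roughly as follows (the predicate names here are illustrative, not drawn from any particular system):

    \forall x\,\forall z\;\bigl(\mathrm{HasDisease}(x, Y) \wedge \mathrm{Infectious}(Y) \wedge \mathrm{CloseContact}(x, z)\bigr) \rightarrow \mathrm{HasDisease}(z, Y)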

However, the biggest triumph of first-order logic was that it allowed models of increasing complexity to be built from the smallest of building blocks. For instance, the scenario above could easily be extended to model the epidemiology of deadly infectious diseases and to draw conclusions about their progression. Logic's ability to compose ever-larger concepts from humble ones even suggested that something analogous might be going on in the human mind.

That was the good news. "The sad part was that, ultimately, it didn't live up to expectations," says Noah Goodman, a cognitive scientist at Stanford University in California. That's because using logic to represent knowledge, and to reason about it, requires us to be precise in our knowledge of the real world. There is no place for ambiguity: something is either true or false, with no maybe. The real world, unfortunately, is full of uncertainty, noise and exceptions to almost every general rule, and AI systems built using first-order logic simply failed to deal with it. Say you want to tell whether person Z has disease Y. The rule has to be unambiguous: if Z came into contact with X, then Z has disease Y. First-order logic cannot handle a scenario in which Z may or may not have been infected.

There was another serious problem: it didn't work backwards. For example, if you knew that Z has disease Y, it was not possible to infer with absolute certainty that Z caught it from X. This typifies the problems faced by medical diagnosis systems. Logical rules can link diseases to symptoms, but a doctor faced with symptoms has to infer backwards to the cause. "That requires turning around the logic formula, and deductive logic is not a very good way to do that," says Tenenbaum.

These problems meant that by the mid-1980s, the AI winter had set in. In popular perception, AI was going nowhere. Yet Goodman believes that, secretly, people didn't give up on it. "It went underground," he says.

The first glimmer of spring came with the arrival of neural networks in the late 1980s. The idea was stunning in its simplicity. Developments in neuroscience had led to simple models of neurons; coupled with advances in algorithms, this let researchers build artificial neural networks (ANNs) that could learn, ostensibly like a real brain. Invigorated computer scientists began to dream of ANNs with billions or trillions of neurons. Yet it soon became clear that our models of neurons were too simplistic, and researchers couldn't tell which of a neuron's properties were important, let alone model them.

Neural networks, however, helped lay some of the foundations for a new AI. Some researchers working on ANNs eventually realised that these networks could be thought of as representing the world in terms of statistics and probability. Rather than talking about synapses and spikes, they spoke of parameterisation and random variables. "It now sounded like a big probabilistic model instead of a big brain," says Tenenbaum.

然后在1988年,加州大学洛杉矶校区的朱迪亚·珀儿写了一本里程碑式的书《智能系统的或然性推理》,里面详细地描述了AI的全新方案。支持这本书的理论是汤玛斯·贝叶斯提出的一个原理。汤玛斯·贝叶斯 是18世纪的一名英国数学家和牧师,他把以事件Q发生为前提下事件P发生的条件概率和以事件P发生为前提下事件Q发生的条件概率联系起来。这个原理提供了一个在原因和结果间来回推导的方法。“如果你能对感兴趣的不同事物用那样的方式描述,那么贝叶斯推论的数学方法会教你如何通过观察结果,然后逆推各种不同起因的可能性,”田纳邦如是说。
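In symbols, Bayes' theorem relates the two conditional probabilities described above:

    \Pr(P \mid Q) = \frac{\Pr(Q \mid P)\,\Pr(P)}{\Pr(Q)}

so that, for diagnosis, \Pr(\text{cause} \mid \text{effect}) \propto \Pr(\text{effect} \mid \text{cause})\,\Pr(\text{cause}).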

The key is a Bayesian network, a model made of various random variables, each with a probability distribution that depends on other variables. Tweak the value of one and you alter the probability distribution of all the others. Given the value of one or more variables, the Bayesian network allows you to infer the probability distribution of the other variables - in other words, their likely values. Say these variables represent symptoms, diseases and test results. Given test results (a viral infection) and symptoms (fever and cough), one can assign probabilities to the likely underlying causes (flu, very likely; pneumonia, unlikely).
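A minimal sketch of that kind of reasoning is shown below. The numbers and the deliberately tiny, naive network structure (one disease node with independent symptom nodes) are invented for illustration; real diagnostic networks are far larger.

    # Toy Bayesian-network-style diagnosis by brute-force enumeration.
    # All probabilities here are hypothetical.

    PRIOR = {"flu": 0.10, "pneumonia": 0.01, "healthy": 0.89}

    # P(symptom | disease), with symptoms assumed conditionally independent
    # given the disease, to keep the example small.
    LIKELIHOOD = {
        "flu":       {"fever": 0.90, "cough": 0.80, "viral_test_positive": 0.70},
        "pneumonia": {"fever": 0.80, "cough": 0.90, "viral_test_positive": 0.20},
        "healthy":   {"fever": 0.05, "cough": 0.10, "viral_test_positive": 0.05},
    }

    def posterior(observed):
        """Return P(disease | observed evidence) via Bayes' rule."""
        unnormalised = {}
        for disease, prior in PRIOR.items():
            p = prior
            for symptom in observed:
                p *= LIKELIHOOD[disease][symptom]
            unnormalised[disease] = p
        total = sum(unnormalised.values())
        return {d: p / total for d, p in unnormalised.items()}

    if __name__ == "__main__":
        evidence = ["fever", "cough", "viral_test_positive"]
        for disease, prob in sorted(posterior(evidence).items(), key=lambda kv: -kv[1]):
            print(f"{disease:10s} {prob:.3f}")   # flu comes out very likely, pneumonia unlikely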

By the mid-1990s, researchers including Russell began to develop algorithms for Bayesian networks that could utilise and learn from existing data. In much the same way as human learning builds strongly on prior understanding, these new algorithms could learn much more complex and accurate models from much less data. This was a huge step up from ANNs, which did not allow for prior knowledge; they could only learn from scratch for each new problem.

Nuke hunting

The pieces were falling into place to create an artificial intelligence for the real world. The parameters of a Bayesian network are probability distributions, and the more knowledge one has about the world, the more useful these distributions become. But unlike systems built with first-order logic, things don't come crashing down in the face of incomplete knowledge.

Logic, however, was not going away. It turns out that Bayesian networks aren't enough by themselves, because they don't allow you to build arbitrarily complex constructions out of simple pieces. Instead, it is the synthesis of logic programming and Bayesian networks into the field of probabilistic programming that is creating a buzz.

At the forefront of this new AI are a handful of computer languages that incorporate both elements, all of them still research tools. There is Church, developed by Goodman, Tenenbaum and colleagues and named after Alonzo Church, who pioneered a form of logic for computer programming. Domingos's team has developed Markov logic networks, which combine logic with Markov networks - similar to Bayesian networks. Russell and his colleagues have the straightforwardly named Bayesian Logic (BLOG).

(Translator's note: Church is an AI language developed at MIT. Most AI techniques in use today are based either on logic or on probability. Rule-based AI has waning prospects because its rules come in too many varieties to compute with; probability-based AI is more widely applied - at its core it simulates intelligence with large bodies of data - but is hard to use for more abstract problems. Goodman's Church language fuses the logical and probabilistic approaches, and may well represent a major leap for AI and cognitive science. MIT news officer Larry Hardesty summed up the new technique with an example: tell a Church-based program that the cassowary is a bird and it will infer that cassowaries can fly; add that the animal weighs around 200 pounds and the program revises that earlier inference, concluding that the cassowary, though a bird, cannot fly.)
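None of these languages' actual syntax appears in the article. Purely to convey the flavour of a probabilistic program - a generative model built from simple random choices - here is a sketch in plain Python, mirroring the infectious-disease example; it is not Church, BLOG or Markov logic code, and every probability is invented.

    import random

    def infection_model():
        # Random choices play the role of uncertain facts in the logic example.
        x_has_disease = random.random() < 0.05        # prior belief that X is infected
        close_contact = random.random() < 0.30        # prior belief that X and Z met closely
        if x_has_disease and close_contact:
            z_has_disease = random.random() < 0.80    # transmission is likely, not certain
        else:
            z_has_disease = random.random() < 0.01    # background infection rate
        return {"X": x_has_disease, "contact": close_contact, "Z": z_has_disease}

    # Running the program forwards simulates possible worlds; inference means asking
    # which of those worlds are consistent with what was actually observed.
    print([infection_model() for _ in range(5)])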

Russell demonstrated the expressive power of such languages at a recent meeting of the UN's Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) in Vienna, Austria. The CTBTO had invited Russell on a hunch that the new AI techniques might help with the problem of detecting nuclear explosions. After a morning listening to presentations on the challenge of detecting the seismic signatures of far-off nuclear explosions against a background of earthquakes, the vagaries of signal propagation through the Earth, and noisy detectors at seismic stations worldwide, Russell sat down to model the problem using probabilistic programming (Advances in Neural Information Processing Systems, vol 23, MIT Press). "And in the lunch hour I was able to write a complete model of the whole thing," he says. It was half a page long.

Prior knowledge can be incorporated into this kind of model, such as the probability of an earthquake occurring in Sumatra, Indonesia, versus Birmingham, UK. The CTBTO also requires that any system assume a nuclear detonation is equally probable anywhere on Earth. Then there is real data - the signals received at the CTBTO's monitoring stations. The job of the AI system is to take all of this data and infer the most likely explanation for each set of signals.

Therein lies the challenge. Languages like BLOG are equipped with so-called generic inference engines. Given a model of some real-world problem, with a host of variables and probability distributions, the inference engine has to calculate the likelihood of, say, a nuclear explosion in the Middle East, given the prior probabilities of expected events and new seismic data. But change the variables to represent symptoms and diseases and it must then be capable of medical diagnosis. In other words, its algorithms must be very general - which means they will be extremely inefficient.
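One way to see why generality costs so much is rejection sampling, a textbook inference method sketched below in Python (the seismic model and its numbers are hypothetical, loosely inspired by the CTBTO example rather than taken from it): a single routine works for any model and any observation, but it throws away most of the runs it simulates.

    import random

    def rejection_query(model, consistent_with_data, num_samples=200_000):
        """A generic, deliberately naive inference engine: simulate the model many
        times and keep only the runs that reproduce the observed data. It works
        for any model, which is exactly why it is so inefficient."""
        return [s for s in (model() for _ in range(num_samples)) if consistent_with_data(s)]

    def seismic_event():
        explosion = random.random() < 0.001             # explosions assumed very rare a priori
        if explosion:
            strong_signal = random.random() < 0.9       # explosions usually give a sharp signal
        else:
            strong_signal = random.random() < 0.05      # most earthquakes do not
        return {"explosion": explosion, "strong_signal": strong_signal}

    if __name__ == "__main__":
        runs = rejection_query(seismic_event, lambda s: s["strong_signal"])
        # With these numbers, roughly 5% of runs are kept, so the estimate is stable.
        print(f"P(explosion | strong signal) ~ {sum(r['explosion'] for r in runs) / len(runs):.4f}")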

The result is that these algorithms have to be customised for each new challenge. But you can't hire a PhD student to improve the algorithm every time a new problem comes along, says Russell. "That's not how your brain works; your brain just gets on with it."

This is what gives Russell, Tenenbaum and others pause as they contemplate the future of AI. "I want people to be excited but not feel as if we are selling snake oil," says Russell. Tenenbaum agrees. Even as a scientist on the right side of 40, he thinks there is only a 50:50 chance that the challenge of efficient inference will be met in his lifetime - and that is despite the fact that computers will get faster and algorithms smarter. "These problems are much harder than getting to the moon or Mars," he says.

This, however, is not dampening the spirits of the AI community. Daphne Koller of Stanford University, for instance, is attacking very specific problems using probabilistic programming and has much to show for it. Along with neonatologist Anna Penn, also at Stanford, and colleagues, Koller has developed a system called PhysiScore for predicting whether a premature baby will have any health problems - a notoriously difficult task. Doctors are unable to predict this with any certainty, "which is the only thing that matters to the family", says Penn.

PhysiScore takes into account factors such as gestational age and weight at birth, along with real-time data collected in the hours after birth, including heart rate, respiratory rate and oxygen saturation (Science Translational Medicine, DOI: 10.1126/scitranslmed.3001304). "We are able to tell within the first 3 hours which babies are likely to be healthy and which are much more likely to suffer severe complications, even if the complications manifest after 2 weeks," says Koller.

"Neonatologists are excited about PhysiScore," says Penn. As a doctor, Penn is especially pleased with the ability of AI systems to weigh hundreds, if not thousands, of variables when making a decision. This could make them even better than their human counterparts. "These tools make sense of signals in the data that we doctors and nurses can't even see," says Penn.

This is why Domingos places such faith in automated medical diagnosis. One of the best-known examples is the Quick Medical Reference, Decision Theoretic (QMR-DT), a Bayesian network that models 600 significant diseases and 4000 related symptoms. Its goal is to infer a probability distribution over diseases given some symptoms. Researchers have fine-tuned QMR-DT's inference algorithms for specific diseases and taught it using patients' records. "People have done comparisons of these systems with human doctors and the systems tend to win," says Domingos. "Humans are very inconsistent in their judgements, including diagnosis. The only reason these systems aren't more widely used is that doctors don't want to let go of the interesting parts of their jobs."

There are other successes for such techniques in AI, one of the most notable being speech recognition, which has gone from being laughably error-prone to impressively accurate (New Scientist, 27 April 2006, p 26). Doctors can now dictate patient records and have speech recognition software turn them into electronic documents, limiting the need for manual transcription. Language translation is also beginning to replicate the success of speech recognition.

Machines that learn

But some areas still pose significant challenges. Understanding what a robot's camera is seeing is one of them; solving that problem would go a long way towards creating robots that can navigate by themselves.

Besides developing inference algorithms that are flexible and fast, researchers must also improve the ability of AI systems to learn, whether from existing data or from the real world via sensors. Today, most machine learning is done with customised algorithms and carefully constructed data sets, tailored to teach a system to do something specific. "We'd like to have systems that are much more versatile, so that you can put them in the real world and they learn from a whole range of inputs," says Koller.

The ultimate goal for AI, as always, is to build machines that replicate human intelligence, but in ways that we fully understand. "That could be as far off, and maybe even as dangerous, as finding extraterrestrial life," says Tenenbaum. "Human-like AI, which is a broader term, has room for modesty. We'd be happy if we could build a vision system which can take a single glance at a scene and tell us what's there - the way a human can."

 


Anil Ananthaswamy is a consultant for New Scientist

Chris's comment:

As technology advances, AI's ultimate goal may one day be achieved, but the road there will be long and arduous. By then humanity may have entered an entirely new era: will machines endowed with complex, human-like minds take humanity's place on this planet? The boundaries drawn by ethics and the laws of nature may gradually blur as new technologies and inventions appear. For science, these great creations undoubtedly generate enormous value for society, for nations and for humanity as a whole, and mark a historic technical leap; yet we must not overlook the hidden dangers of this double-edged sword...