
Will AI Ever Think Like a Human?

Read the exchange below from a recent 60 Minutes episode featuring a man who may know more about Artificial Intelligence than anyone in the world. I find this absolutely fascinating. In the sciences that deal with the nature of reality and consciousness, the people who know the most feel very differently from those who know less but still work in the field. Those with the most knowledge and wisdom at the top of the field understand that AI may NEVER think like a human, because there is something irreducible about consciousness, which remains a mystery. No model shows how consciousness arises from brains or matter. It is therefore possible that AI will never become conscious or self-aware, and thus never become autonomous or self-directing; that is, computers and robots may NEVER have a will of their own. Most of those working at other levels of the field subscribe to the “machines will become conscious in a singularity” dogma popularized by people like Ray Kurzweil. Ironically, such views appear to rest more on faith, a faith in science and materialism, than on empiricism. Kai-Fu Lee didn’t reach the top of the AI field for nothing. He knows more about it than anyone on the planet, and he knows what he doesn’t know and the limits of human endeavor. That sets him apart in a big way.

Scott Pelley: When will we know that a machine can actually think like a human?

Kai-Fu Lee: Back when I was a grad student, people said, "If machine can drive a car by itself, that's intelligence." Now we say that's not enough. So, the bar keeps moving higher. I think that's, I guess, more motivation for us to work harder. But if you're talking about AGI, artificial general intelligence, I would say not within the next 30 years, and possibly never.

Scott Pelley: Possibly never? What's so insurmountable?

Kai-Fu Lee: Because I believe in the sanctity of our soul. I believe there is a lot of things about us that we don't understand. I believe there's a lot of love and compassion that is not explainable in terms of neural networks and computation algorithms. And I currently see no way of solving them. Obviously, unsolved problems have been solved in the past. But it would be irresponsible for me to predict that these will be solved by a certain timeframe.

Scott Pelley: We may just be more than our bits?

Kai-Fu Lee: We may.
