Jan. 2nd, 2016

heron61: (Look to the future)
Breakthroughs in artificial intelligence have begun making the news, and while anything close to human intelligence, or for that matter the intelligence of any vertebrate, is still a ways in the future, the recent advances are impressive in ways they haven't previously been.

It is therefore unsurprising that concern about the dangers of AI is in the news for the first time. I share a few of these concerns – autonomous weapons (which the US Air Force is considering) are from my PoV an astoundingly stupid and terrible idea, not because intelligent machines would use them to kill us all, but because a single software glitch could result in lots of dead humans.

However, I've always been deeply suspicious of the sort of fear, and occasionally even panic, about human- and superhuman-level AI found on sites like Less Wrong, or described somewhat more sensibly here, and in greater detail, here.

I've read counterarguments against the risk of AI by Charles Stross and in this interesting and excellent piece. However, none of them felt like they fully addressed my sense that the entire debate was silly and pointless. Then, while reading the "Should AI be Open" article linked to above, I had an epiphany – for any of the "AI Risk" arguments about the inherent dangers of superhumanly intelligent AI to make sense, you need to posit a hard-takeoff singularity.

Without that, absolutely none of the arguments make sense, because instead of a runaway superintelligence swiftly becoming unknowable and unstoppable, you have a slow and difficult process: teams of humans and one or more human-intelligence AIs working to find ways to increase AI intelligence. Many months, or more likely at least several years, after creating an AI as intelligent as an average human, you have one as intelligent as one of the smartest humans, and then at least a few years after that (if not significantly longer), someone finally learns how to make an AI more intelligent than any human who has ever lived. Given that every other recent technological advance required considerable effort and time, it seems impressively unlikely that AI will prove any different, especially since it has already proven exceedingly difficult. It's not as if a human-level AI is going to have a much better idea of how to make a more intelligent AI than the people who created it.

Also, many of the "AI Risk" scenarios require even more than a hard-takeoff singularity – they also require self-replicating nanotechnology of the sort that can swarm over the planet, which breaks a few physical laws and would likely end up being eaten by far older and more determined nanotechnology (i.e. existing microscopic lifeforms). It seems to me that the fear of AI among intelligent, well-educated IT professionals comes down to imagining a sort of AI more at home in a grim version of Disney's The Sorcerer's Apprentice than anything anyone has any actual evidence will, or even could, exist.

In any case, I suspect that in less than five years we'll have software that is not in any way conscious or intelligent, but which can fool most people into thinking it is, since humans are easy to fool. Eventually, perhaps in 20-50 years, something like true human-level artificial intelligence will exist, but creating it will be a slow and difficult process, as will creating something smarter than it.
