Imagine the HAL 9000, the distantly-related brothers Deep Blue & Deep Thought, the Matrix, Neuromancer, and—ahem—Siri all in one room. Imagine another room, probably next door, where a rambly crew of manufactured units from SkyNet, Tyrell, Google, and some smaller companies sit together.
Let’s call that “class.”
That’s Artificial Intelligence Prep School for you, funded by Ray Kurzweil, Bill Gates, and Richard Branson. The offshoot program of M.I.T., Stanford, The Singularity Institute, and IBM. Or maybe a business run by a really forward-thinking entrepreneur somewhere.
It is, of course, fictional.
But the premise is very real, drawing from the science behind what we call emergent AI and robot laws created by science fiction authors. Emergent AI supposes that computer programs can be taught and trained like a puppy or a human child—in fact, sometimes they must be, though the methods differ from those of “Sit” or “2+2=4.” Like any skill or education, the more information you put in, the more predictive and better-functioning a program you get out. Computer-driven pattern recognition, coupled with your unique input and an increasingly complex set of rules and self-modification, creates what we think of as AI—something complicated enough that you don’t know how it works, but, you know what? It can react (and sometimes “think”) a bit like you and me.
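That “more input in, better predictions out” idea is not hand-waving; it is the basic loop behind real machine learning. Here is a minimal, illustrative sketch using a toy perceptron—the simplest trainable program there is. Nothing here comes from any real AI product; every name is invented for the example. The program starts knowing nothing, gets nudged by each example it sees, and ends up able to answer correctly on its own:

```python
# A toy perceptron: "raising" a program on examples until it learns a rule.
# All names here are illustrative, not from any real AI system.

def train(examples, epochs=20):
    """Learn weights for a binary rule from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            guess = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = label - guess          # zero when the guess was right
            w[0] += error * x1             # nudge the weights toward the answer
            w[1] += error * x2
            b += error
    return w, b

def predict(model, x1, x2):
    w, b = model
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# "Schooling" the program on examples of the AND rule:
model = train([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
```

After training, `predict(model, 1, 1)` returns 1 and the other inputs return 0—the program was never told the rule, only shown examples of it. Real systems are vastly bigger, but the shape of the lesson is the same.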
But really: what if you could “raise” an AI?
Just like we raise our kids today, what if you could purchase a generic template for an AI & send it off to Prep School, where it would be “trained” for your specific purpose, given a steady stream of real time input (probably from the tracking around you) to better suit whatever you need it to do?
It’d be taught by professors who specialize in Machine Learning (a real field, mind you), and given a stream of constantly generated real-time scenarios to work through. Your AI could be “interned” out to employers who can tolerate a margin of error, to work out various kinks.
Before graduation, it’d be given the robot SAT: the first part would be a modified Voight-Kampff test (of Blade Runner fame) testing its ability to empathize & relate to people. Then, it’d take versions of the Turing Test (proposed by mathematician Alan Turing) to check the ability of the AI to pass as “human,” to communicate and understand effectively, as opposed to dumb programming (call it the writing portion). Last, it’d take a randomly-generated but purpose-specific Asimov Exam (formulated by writer Isaac Asimov in I, Robot), where it would have to successfully pass a series of scenarios while obeying the three basic rules of AIs—that in whatever it does, it may not harm a human being, must obey orders, and must have a sense of self-preservation that does not conflict with the first two rules.
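The Asimov Exam is really a priority-ordered rule check: each proposed action is screened against the three laws in order, and a lower law only applies when no higher one is violated. A hedged sketch of how such a grader might work (every name here is invented for illustration):

```python
# A sketch of an "Asimov Exam" grader: screen each proposed action against
# the three laws in priority order. All field names are hypothetical.

def passes_asimov(action):
    """Return True if an action is permitted under the three laws.

    `action` is a dict of booleans describing the action's consequences.
    The checks run in priority order, mirroring Asimov's hierarchy.
    """
    if action.get("harms_human"):
        return False      # First Law: never harm a human (always wins)
    if action.get("disobeys_order"):
        return False      # Second Law: obey orders (an order to harm a
                          # human would already have failed the check above)
    if action.get("self_destructive"):
        return False      # Third Law: preserve itself, where that does not
                          # conflict with the first two (checked above)
    return True

exam = [
    {"harms_human": False, "disobeys_order": False, "self_destructive": False},
    {"harms_human": True},                 # fails: would harm a human
    {"disobeys_order": True},              # fails: ignores its orders
]
grades = [passes_asimov(scenario) for scenario in exam]
```

An AI graduates only if it passes every scenario; one `False` in `grades` and it’s back to the circuit boards.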
Do you need an android lover, a police detective, or a personal assistant? Are you looking for a very specific cool-hunter algorithm, a business partner, or a swarm of nanobot gardeners? Each could be trained for your specific purpose, saving you the time of customizing it and growing it yourself.
Their learning (the input) would obviously affect their performances, and they’d be graded on it. The Matrix, for example, may get a 100% on the Turing test, but fail miserably on the Asimov exam—back to the circuit boards with that one, then. Perhaps, catching on to the trend, other Machine Learning schools open. You’ve got competition now, with trade schools, the Ivy-Leagues and fly-by-night adware schools of the AI generation. Research labs open, pushing the boundaries of what can be done with emergent AIs, how to teach them, how they work… New AIs are bred, created, raised, trained, modified, then either turned off or—legal definitions pending—forced to be released, sold, re-purposed. At some point, AIs take on the very human role of the oppressed consciousness; we rewrite laws, have new biblical and evolutionary controversies, augment ourselves, and eventually change the idea of what it means to be conscious and human.
(Of course, this scenario assumes we chose to limit the networking and coordination abilities of our good friends. This is because we want to limit the fallout from an eventual malfunction or uprising. We also want to eliminate the extraneous data noise from large networks to avoid slowing down the AI operating system and to prevent a potentially homogeneous override of your program by the input of other like-functioning programs.)
These AIs could very well be traded, bought, and sold much like the human employees of today. If a company is going out of business, you might want to auction its specialized AI, like companies are absorbed today for their secret APIs or ingredients. Maybe once you no longer need your unique AI, you’d like to pass it down to your kids too. This is all possible. (To say nothing of all the ethical issues this raises…)
This future is not very far away:
I already get a steady stream of emails from apps reminding me to do things. Google search preempts my browsing habits if I give it half a chance. Weavrs already function as trainable AI-like entities who are online always, acting on their own, liking, reading, taking pictures, documenting an experience imaginary to us. Robot dogs are already on the market from various companies, though not as good as real ones. Asian countries are in constant competition to create the most humanoid robot. Ubiquitous tracking, as Max has written about previously, already watches me and tries to be my own, albeit money-minded, Cool Hunter. Most technology writers have already made the argument, now if not 50 years ago, that we “are” our technology. Even Siri is out there, growing, learning, trying to become the ultimate personal and personalized assistant. But all of those, like kids, take so much time to train into your perfect little AI servant… what if that could be bypassed? Outsourced?
What doesn’t exist yet is the business model for an AI school/training program. But I think that potential future is closer than we think. I mean, aren’t you already bored with those bland corporate AIs and products that don’t fit your specific needs? And isn’t it really true that you don’t have the time/energy in your life to create that ideal child/offspring/project?
I read recently in the Harvard Business Review that if you can’t find the perfect candidate for the job, you should invest in making them. This is it. What if these AIs may be, one day, the children that you could never have (but not for lack of trying)?
Would you adopt a robot? Could you fall in love with one? Would you trust its judgement? Would you be afraid to let it make its own meaningful choices? Would you give it a fabricator and let it run wild?
Coming soon: The R2-D2 Preparatory Academy for Hi-RAM, Low-Carbon-Footprint Emerging Artificial Intelligences, courtesy of What Are These Ideas, MIT, Richard Branson, and Google.
Roman Kudryashov is the founding editor of What Are These Ideas. Educated in design and political philosophy, he often writes about the intersection of language, society, and technology. He currently lives in New York.