Thalience: the successor to Science?
What if you could separate the activity of science from the human researchers who conduct it? Automate it, in fact? Imagine creating a bot that does physics experiments and builds an internal model of the world based on those experiments. It could start out as something simple that stacked blocks and knocked them over again. Later models could get quite sophisticated; and let's say we combine this ability with the technology of self-reproducing machines (von Neumann machines). Seed the moon with our pocket-protector-brandishing AIs and let them go nuts. Let them share their findings and refine their models.
So far so good. Here's the question that leads to the notion of thalience: if they were allowed to freely invent their own semantics, would their physical model of the universe end up resembling ours? -I don't mean would it produce the same results given the same inputs, because it would. But would it be a humanly-accessible theory?
In Ventus, of course, the thalient system has lost the ability to communicate with humans; but the end of the novel holds out the hope that some sort of bridge can be constructed. Strangely, this bridge appears in the form of politics, rather than as a meeting of minds through Reason or Mathematics.
So writes Karl Schroeder, elaborating on the concept of "Thalience" from his novel, Ventus. Interesting spur to the imagination, no doubt, but I take a few issues with this idea.
First off, there's a big difference between science and discovery. The scientific method is exactly that: a method, not an end in itself. Scientific discovery, meanwhile, is about figuring out solutions to problems and mapping chains of cause and effect. The two are ingeniously combined under the umbrella word "science," but they are not the same.
As an example: Google makes a self-driving off-road car. Does it then start driving itself around & picking up groceries or going on road trips? No. It only starts doing that when given a command to do so; the problem to be solved is to get to destination 'A' in the shortest time and/or distance. Along the way, the self-driving car will no doubt encounter obstacles, traffic, turns, and more. At this point, complex decision trees call upon the car's infrastructure, latent networks, and preprogrammed instructions to resolve those problems. Traffic jam? Activate proximity sensors. Flood warning? Avoid the route. Flat tire? Report to the nearest "auto mechanic" found by a web geo search. Running late? Re-prioritize toward the objective of 'shortest distance' instead. And so on.
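The dispatch described above can be sketched in a few lines. This is only an illustration of the shape of the logic, not any real system's API; every name and rule here is a hypothetical stand-in:

```python
# A minimal sketch of rule-based dispatch: detected conditions map to
# preprogrammed responses. All handlers and conditions are hypothetical.

def handle_obstacle(event: str) -> str:
    """Map a detected condition to a canned response."""
    rules = {
        "traffic_jam": "activate proximity sensors",
        "flood_warning": "avoid route",
        "flat_tire": "report to nearest auto mechanic (geo search)",
        "running_late": "re-prioritize objective: shortest distance",
    }
    # No rule matched: fall back to a default. Note there is no path
    # here for inventing a new rule -- no creative leap, just lookup.
    return rules.get(event, "stop and await further instructions")

print(handle_obstacle("traffic_jam"))    # activate proximity sensors
print(handle_obstacle("meteor_strike"))  # stop and await further instructions
```

The point of the sketch is what's missing: every response is authored in advance, so "novel" behavior can only come from the combinatorics of the rules, never from a new rule the machine wrote itself.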
The car doesn't exactly break ground here, but a combination of various instructions and obstacles can lead to some novel results, such as when a self-charging robot, running low on battery, finds the person it is supposed to serve blocking the charging socket. Asimov, your three laws are calling.
At which point, then, would some version of Occam's Razor come into play for these autonomous machines ("this explanation works & we need to explore no further")? I predict around the same point as when those machines allow for static trend speculation ("this repeatedly happens, therefore it will be a causal law"). Or, at which point would a machine know that we need to magnify an object to find its cellular composition, if that cellular composition isn't important to the subject at hand? Can a machine be programmed with the capacity to learn new things outside of a direction-driven algorithm? To take happy accidents & spin new models from them, instead of discounting those initial 'failure' results as irrelevant to the question at hand?

I'm sure there's an argument somewhere that a properly coded directive of "understand the laws of the universe" is possible, and that an autonomous network of robots could go around testing causal chains using the scientific method, but scientific inquiry often leads to dead ends that require creative leaps or speculation to even know what to test in the first place.
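The static trend speculation mentioned above — repetition promoted to law — is easy to mechanize, which is exactly the trap. A toy sketch, with all names and the threshold invented for illustration:

```python
# Naive induction: any outcome observed at least `threshold` times
# gets declared a "causal law", and rarer observations are discarded
# as noise -- the happy accidents the essay says get thrown away.
from collections import Counter

def promote_to_law(observations, threshold=5):
    """Return the set of outcomes naively promoted to 'laws'."""
    counts = Counter(observations)
    return {event for event, n in counts.items() if n >= threshold}

# The sun has risen every day so far...
laws = promote_to_law(["sunrise"] * 7 + ["eclipse"])
print(laws)  # {'sunrise'} -- the one-off 'eclipse' is dismissed as noise
```

Note that the anomalous observation is precisely the one a human scientist might chase; the counting machine, by design, throws it out.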
That's like asking: would we have figured out quantum mechanics by trying to figure out relativity? I'm not informed enough to answer, but it seems like a long shot (improbable, not impossible). Moreover, we needed new tools to even get to those scales, and how likely would it be for even a complex network to create the required tools, from microscopes to particle colliders? Even if super-complex, autonomous, networked, self-reproducing machines were capable of such creative leaps, how likely would they be to actually build those tools, given physical constraints? Assuming any sort of resource scarcity, would those machines also be capable of developing the most economic and sustainable methods of resource and project allocation to offset the necessary politics of scarcity?
I'd be fascinated to see all of this happen, and I'm not being cynical or sarcastic. But if you're still with me throughout all of this hypothetical mumbo-jumbo, then I've got a more interesting question:
At which point would an automated science attempt to resolve confounding factors (especially as the scale of questions/problems gets bigger) & find itself in the social sciences? At which point would a machine need to account for the "human element," attempting to model the more complex and dynamic social realm? And if it does, is that an implication that the social sciences can be automated & predicted & modeled in the same way?
To me, that's the more far-reaching question, and I find it interesting that Schroeder frames the basic translation problem between those automated machines and humanity as a political one. There's a kitchen sink of implications there, considering this isn't just a guy who writes sci-fi, but also a consultant called upon by the Canadian armed forces to craft potential future scenarios for planning, & who has a Master's degree in strategic foresight, assuming that means anything. In a sense, it's a shadow of the idea that all cooperation and communication is political, which most readers would find hard to disagree with.
And the last question, and perhaps the most paradoxical one: if we are able to create such an autonomous network of multifaceted machines able to problem-solve and reproduce, aren't we essentially recreating life as we know it? The genome is exactly that sort of open-ended code for reproduction and evolution; machine learning is modeled closely on the neural networks in our heads; and all of this is brought full circle in the computational theory of mind. To say nothing of tool use, inter-species interaction, and the resource scarcity of the natural world. Schroeder's hypothetical thalience is nothing more than an artificial god-game, then, a clever but ultimately meaningless plot-twist. Sure, you can automate science, but the complex environment within which it would happen, dictated by all sorts of resource scarcity & allocation between creative pursuits, is already here. Just pretend we're machines.
Paradox? Nah. But maybe we should be looking at machine- or automated-politics instead. How do you rationally decide and allocate actions between competing ideologies, when two things are hypothetically, simultaneously, equally right and equally wrong? Maybe we'd be able to predict the results of the Romney-Obama election then; more political science than politics.
& at the risk of going on for way too long & overstepping my usefulness, I want to throw out this last idea, as a final coda to all of our talk about science & understanding the world, and an open call for discussion:
"I once overheard the historian and radical pedagogue Hayden White say something that resembled the following statement: 'The misunderstanding is that science is considered more precise than poetry.' White's claim encourages that the world be perceived outside that which is solely measurable. While an inch remains an inch yesterday, today, and tomorrow, a poem capture how the poet feels at a precise moment in time. Following from this logic, the measurement of the world inch by inch tells us little about the myriad poetic precisions that make up the world we inhabit." (Yasushi Tanaka-Gutiez, Shoppinghour Magazine #8)
Machine poetics & artificial god-games indeed.
Roman Kudryashov is the founding editor of What Are These Ideas. Educated in design and political philosophy, he often writes about the intersection of language, society, and technology. He currently lives in New York.