The U.S. Tennis Association just wanted to fix a few cracks in its Louis Armstrong Stadium. So what are New York City community boards, the last vestige of a millennia-old democratic process, getting worked up about?
Hello and welcome to the first edition of the Weekender of the New Year.
When researching for a project or a new article, we often come across many great reads that are worth sharing. These often turn into heavily annotated notes or even whole segments left on the cutting-room floor. So we’re introducing the Reading List, a new segment of annotated research and interesting reads that we’ve done on a topic. Where applicable, we’ve also linked to personally annotated and highlighted copies. Help yourself, whether as a cheat sheet or an actual reading list. To be updated periodically.
Bumper sticker spotted today: “I live in a society, not an economy”
— Karl Schroeder (@KarlSchroeder) October 9, 2012
Two recent events have driven the debate about America’s economic future: the first has been the general election for our next President; the second has been Hurricane Sandy, its devastating effects, and how to deal with such events in the future. The two are intricately tied, and not just because “climate science” has become a political issue. Hurricane Sandy has exposed the need for infrastructural investment in this country, and that goes to the core of our economic issues, as well as the difference between what an economy is and what a market is.
By Roman Kudryashov
Many voters and pundits have been making the case against President Obama’s reelection because he, as President, should have fixed the economic climate. The economy is only slightly better than that of four years ago, and four years ago was a pretty low point to compare to. The reluctance is understandable. However, it misses some really important points about how economics works.
Economics is not a science, however much people would like to believe it is. You can’t “test” economic theories the way you can in other fields. While we have anecdotal data and see real world applications of economic ideas, there are too many confounding factors (like, everything that happens in the world) to take any economic “experiment” seriously. Economics is a system of very many moving parts between very many parties; even so-called isolated economies, like that of North Korea, don’t “prove” anything since they’re impacted by events outside of their countries (international politics, populations, environmental factors, and so forth). Even the weather can have profound implications on economic situations: consider a drought or a hurricane.
“This is a box. Your turn.”
That’s how we’re starting it. For two years, What Are These Ideas has been publishing original and thought-provoking pieces, asking readers to think again about everything from genetics to social technology to today’s news. Today, we’re putting out our first contest, on the theme of “Think Outside of the Box.”
The premise is simple. Submit a drawing, illustration, photograph, or a series of any of those on the theme of “Think Outside of the Box,” to firstname.lastname@example.org – the winning pieces (and there’ll be multiple) will be published as part of our forthcoming app & print book of the same name. Winners will get a copy of the book (once printed), and of course, credit.
The purpose of the contest is twofold.
So… that’s our first box. Your turn.
Submit to email@example.com — For images to work on our app, they must be simple enough to understand on a phone screen. For images to work within our print book, they must be at least 300 dpi. Contest runs until December 1st.
What does it mean to accept the infamous Oslo terrorist as “sane?” How do we cope, and what should we ask ourselves when normal men commit heinous crimes?
By Roman Kudryashov
On August 24th, 2012, a Norwegian court found Anders Behring Breivik sane and guilty of 77 murders. He was sentenced to a maximum of 21 years in prison, and to a quiet cheer, a dark stain of recent history was resolved.
Outside of the actual murder and bombing spree, there was another little controversy: was he sane or not? Before he went on his rampage, Breivik put forth a long, rambling, and somewhat persuasive (in radical circles) manifesto similar in spirit (though not substance) to that of Ted Kaczynski, the infamous Unabomber. In response, the court initially ruled that Breivik was a paranoid schizophrenic; both Breivik and the prosecution argued otherwise. In a quote that should go down in history, Tore Sinding Bekkedal, one of the survivors, said: “I believe he is mad, but it is political madness and not psychiatric madness.” So the decision was reversed: Breivik was sane (though also deemed “narcissistic” by psychiatric teams associated with the case), and guilty. The court closed with the maximum punishment in Norwegian law.
And Tore Bekkedal is right. To have declared Breivik insane would have been to absolve him of guilt. He is, as Bekkedal continued, “a sad and pathetic person,” a fringe extremist, or in some circles, a revolutionary.
Everyone agrees: politics are disappointing. Corruption, lobbyists, unresponsive representatives, deadlocks, endless debt, partisanship, social and financial issues, ridiculous laws and rulings, and an endless barrage of misleading statements make for a pretty… well, shitty time.
So, I asked, “Can we reinvent political participation?” Can we make politics relevant and meaningful again? How?
Can monkeys with typewriters teach us about computers, cryptology, and artificial intelligence? Yes, yes they can.
By Roman Kudryashov
1. Fixing the Problem of Monkeys
It was all very simple. Give an infinite number of monkeys an infinite number of typewriters, set them to work for an infinite amount of time, and they’re bound to replicate the complete works of Shakespeare. Eventually. Pure dumb luck, but given the time, it could happen. Not likely by any stretch, but statistically possible. So I figured: could we make it more likely?
I propose that it is more statistically likely for monkeys to randomly write Shakespeare in binary code than in regular English.
To remind everyone, binary code is the most basic language of computers, the 100110s that make decoding the green rain in the Matrix seem simple. Generally speaking, a single letter in binary code takes 8 (!) digits to write: “a” would be 01100001; “A” would be 01000001.
The complete works of Shakespeare, as found on Project Gutenberg, are 5,137,094 characters, counting spaces and punctuation. But in binary, this would be approximately 41 million characters, eight times longer than using the English alphabet. How exactly does that make it more likely than before?
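For the curious, the encoding claims and the character arithmetic above are easy to sanity-check in a few lines of Python (a quick verification, not part of the original argument):

```python
# Check the ASCII-to-binary claims: each character is 8 binary digits.
assert format(ord("a"), "08b") == "01100001"
assert format(ord("A"), "08b") == "01000001"

# Shakespeare's complete works, per Project Gutenberg: 5,137,094 characters.
shakespeare_chars = 5_137_094
binary_digits = shakespeare_chars * 8
print(binary_digits)  # 41096752, i.e. roughly 41 million
```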
Imagine the HAL 9000, the distantly-related brothers Deep Blue & Deep Thought, the Matrix, Neuromancer, and—ahem—Siri all in one room. Imagine another room, probably next door, where a rambly crew of manufactured units from SkyNet, Tyrell, Google, and some smaller companies sit together.
Let’s call that “class.”
That’s Artificial Intelligence Prep School for you, funded by Ray Kurzweil, Bill Gates, and Richard Branson. The offshoot program of M.I.T., Stanford, The Singularity Institute, and IBM. Or maybe a business run by a really forward-thinking entrepreneur somewhere.
It is, of course, fictional.
But the premise is very real, drawing from the science behind what we call emergent AI and robot laws created by science fiction authors. Emergent AI supposes that computer programs can be taught and trained like a puppy or a human child—in fact, sometimes they must be, though the methods are different from that of “Sit” or “2+2=4.” Like any skill or education, the more information you put in, the more predictive and better-functioning a program you get out. Computer-driven pattern recognition, coupled with your unique input and an increasingly complex set of rules and self-modification, creates what we think of as AI—something complicated enough that you don’t know how it works, but, you know what? It can react (and sometimes “think”) a bit like you and me.
But really: what if you could “raise” an AI?
Just like we raise our kids today, what if you could purchase a generic template for an AI & send it off to Prep School, where it would be “trained” for your specific purpose, given a steady stream of real time input (probably from the tracking around you) to better suit whatever you need it to do?
It’d be taught by professors who specialize in Machine Learning (a real field, mind you), and given a stream of constantly generated real-time scenarios to work through. Your AI could be “interned” out to employers who can tolerate a margin of error, to work out various kinks.
Before graduation, it’d be given the robot SAT: the first part would be a modified Voight-Kampff test (of Blade Runner fame) testing its ability to empathize & relate to people. Then, they’d take versions of the Turing Test (proposed by mathematician Alan Turing) to check the ability of the AI to pass as “human,” to communicate and understand effectively, as opposed to dumb programming (call it the writing portion). Last, they’d take a randomly-generated but purpose-specific Asimov Exam (formulated by writer Isaac Asimov in I, Robot), where they’d have to successfully pass a series of scenarios while obeying the three basic rules of AIs—that in whatever they do, they may not harm a human being, must obey orders, and must have a sense of self-preservation that does not conflict with the first two laws.
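Those three laws form a strict priority ordering, which (purely as a toy illustration, with invented flag names, not any real grading system) might look like:

```python
# Toy sketch of a priority-ordered "Asimov Exam" check; all flags are made up.
def passes_asimov(action):
    # First Law: may not harm a human being (trumps everything else).
    if action.get("harms_human"):
        return False
    # Second Law: must obey orders, unless obeying would break the First Law.
    if action.get("disobeys_order") and not action.get("order_harms_human"):
        return False
    # Third Law: must preserve itself, unless that conflicts with Laws 1 or 2.
    if action.get("needless_self_destruction"):
        return False
    return True

# Refusing an order is permitted only when the order itself would harm a human.
print(passes_asimov({"disobeys_order": True, "order_harms_human": True}))  # True
```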
Do you need an android lover, a police detective, or a personal assistant? Are you looking for a very specific cool-hunter algorithm, a business partner, or a swarm of nanobot gardeners? Each could be trained for your specific purpose, saving you the time of customizing it and growing it yourself.
Their learning (the input) would obviously affect their performances, and they’d be graded on it. The Matrix, for example, may get a 100% on the Turing test, but fail miserably on the Asimov exam—back to the circuit boards with that one, then. Perhaps, catching on to the trend, other Machine Learning schools open. You’ve got competition now, with trade schools, the Ivy-Leagues and fly-by-night adware schools of the AI generation. Research labs open, pushing the boundaries of what can be done with emergent AIs, how to teach them, how they work… New AIs are bred, created, raised, trained, modified, then either turned off or—legal definitions pending—forced to be released, sold, re-purposed. At some point, AIs take on the very human role of oppressed consciousnesses; we rewrite laws, have new biblical and evolutionary controversies, augment ourselves, and eventually change the idea of what it means to be conscious and human.
(Of course, this scenario assumes we chose to limit the networking and coordination abilities of our good friends. This is because we want to limit the fallout from an eventual malfunction or uprising. We also want to eliminate the extraneous data noise from large networks to avoid slowing down the AI operating system and to prevent a potentially homogeneous override of your program by the input of other like-functioning programs.)
These AIs could very well be traded, bought, and sold much like the human employees of today. If a company is going out of business, you might want to auction their specialized AI, like companies are absorbed today for their secret APIs or ingredients. Maybe once you no longer need your unique AI, you’ll like to pass it down to your kids too. This is all possible. (To say nothing of all the ethical issues this raises…)
This future is not very far away:
I already get a steady stream of emails from apps reminding me to do things. Google search preempts my browsing habits if I give it half a chance. Weavrs already function as trainable AI-like entities who are always online, acting on their own, liking, reading, taking pictures, documenting an experience imaginary to us. Robot dogs are already on the market from various companies, though not as good as real ones. Asian countries are in constant competition to create the most humanoid robot. Ubiquitous tracking, as Max has written about previously, already watches me and tries to be my own, albeit money-minded, Cool Hunter. Most technology writers have already made the argument, now if not 50 years ago, that we “are” our technology. Even Siri is out there, growing, learning, trying to become the ultimate personal and personalized assistant. But all of those, like kids, take so much time to train into your perfect little AI servant… what if that could be bypassed? Outsourced?
What doesn’t exist yet is the business model for an AI school/training program. But I think that potential future is closer than we think. I mean, aren’t you already bored with those bland corporate AIs and products that don’t fit your specific needs? And isn’t it really true, that you don’t have time/energy in your life to create that ideal child/offspring/project?
I read recently in the Harvard Business Review that if you can’t find the perfect candidate for the job, you should invest in making them. This is it. What if these AIs may be, one day, the children that you could never have (but not for lack of trying)?
Would you adopt a robot? Could you fall in love with one? Would you trust its judgment? Would you be afraid to let it make its own meaningful choices? Would you give it a fabricator and let it run wild?
Coming soon: The R2-D2 Preparatory Academy for Hi-RAM, Low-Carbon-Footprint Emerging Artificial Intelligences, courtesy of What Are These Ideas, MIT, Richard Branson, and Google.
Thalience: the successor to Science?
What if you could separate the activity of science from the human researchers who conduct it? Automate it, in fact? Imagine creating a bot that does physics experiments and builds an internal model of the world based on those experiments. It could start out as something simple that stacked blocks and knocked them over again. Later models could get quite sophisticated; and let’s say we combine this ability with the technology of self-reproducing machines (von Neumann machines). Seed the moon with our pocket-protector-brandishing AIs and let them go nuts. Let them share their findings and refine their models.
So far so good. Here’s the question that leads to the notion of thalience: if they were allowed to freely invent their own semantics, would their physical model of the universe end up resembling ours? –I don’t mean would it produce the same results given the same inputs, because it would. But would it be a humanly-accessible theory?
In Ventus, of course, the thalient system has lost the ability to communicate with humans; but the end of the novel holds out the hope that some sort of bridge can be constructed. Strangely, this bridge appears in the form of politics, rather than as a meeting of minds through Reason or Mathematics.
So writes Karl Schroeder, elaborating on the concept of “Thalience” from his novel, Ventus. Interesting spur to the imagination, no doubt, but I take a few issues with this idea.
First off, there’s a big difference between science and discovery. The scientific method is exactly that: a method, and not an end in itself. Meanwhile, scientific discovery is based on figuring out solutions to problems, mapping cause and effect chains. The two are combined ingeniously in the cover word science, but are not the same.
As an example: Google makes a self-driving off-road car. Does it then start driving itself around & picking up groceries or going on road trips? No. It only starts doing that when given a command to do so; the problem to be solved is to get to destination ‘A’ in the shortest time and/or distance. Along the way, the self-driving car will no doubt encounter obstacles, traffic, turns, and more. At this point, complex decision trees call upon the car’s infrastructure, latent networks, and preprogrammed instructions to resolve those problems. Traffic jam?– activate proximity sensors. Flood warning?– avoid route. Flat tire?– report to nearest “auto mechanic” as shown on web geo search. Running late?– prioritize the objective of “shortest distance” instead. And so on.
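Those if-this-then-that responses amount to a lookup table of handlers keyed by obstacle. As a rough sketch (all names and responses invented for illustration, nothing like the actual system):

```python
# Hypothetical obstacle-to-response dispatch table for a rule-driven car.
HANDLERS = {
    "traffic_jam": "activate proximity sensors",
    "flood_warning": "avoid route",
    "flat_tire": "report to nearest auto mechanic from geo search",
    "running_late": "re-prioritize objective to shortest distance",
}

def respond(obstacle):
    # Fall back to a safe default for anything the rules don't cover.
    return HANDLERS.get(obstacle, "pull over and request instructions")

print(respond("flood_warning"))  # avoid route
```

The point of the sketch is the limitation discussed next: every response is preprogrammed, and anything outside the table collapses to a default rather than a creative leap.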
The car doesn’t exactly break ground here, but a combination of various instructions and obstacles can lead to some novel results, such as when a self-charging robot, running low on battery, encounters a person it is supposed to serve blocking a socket. Asimov, your three laws are calling.
I’m sure there’s an argument somewhere that a properly coded directive of “understand laws of the universe” is possible, and an autonomous network of robots can go around testing causal chains using the scientific method, but scientific inquiry often leads to dead ends that require creative leaps or speculation to even know what to test in the first place.
At which point, then, would some version of “Occam’s Razor” come into play for these autonomous machines (“this explanation works & we need to explore no further”)?– I predict around the same point as when those machines allow for static trend speculation (“this repeatedly happens, so therefore it will be a causal law”). Or, at which point would a machine know that we need to magnify an object to find its cellular composition, if that cellular composition isn’t important to the subject at hand? Can a machine be programmed for the capacity of learning new things outside of a direction-driven algorithm?– of taking happy accidents & spinning new models, instead of discounting those initial ‘failure’ results as irrelevant to the question at hand.
That’s asking, would we have figured out quantum mechanics via trying to figure out relativity? I’m not informed enough to answer, but it seems like a long shot (improbable, not impossible). Moreover, we need new tools to even get to those scales, and how likely would it be for even a complex network to create the required tools, things like both microscopes and atomic colliders? Even were super complex autonomous networked self-reproducing machines capable of such creative leaps, how likely would they be to actually create them, given physical constraints? Assuming any sort of resource scarcity, would those machines also be capable of developing the most economic and sustainable methods of resource and project allocation to offset the necessary politics of scarcity?
I’d be fascinated to see all of this happen, and I’m not being cynical or sarcastic. But if you’re still with me throughout all of this hypothetical mumbo-jumbo, then I’ve got a more interesting question:
At which point would an automated science attempt to resolve confounding factors (especially as the scale of questions/problems gets bigger) & get itself into social sciences? At which point would a machine need to account for the “human element,” and find itself attempting to account for the more complex and dynamic social realm? And if it does, is that an implication that the social sciences can be automated & predicated & modeled in the same way?
To me, that’s the more far-reaching question, and I find it interesting that Schroeder accounts for the basic translation problem between those automated machines and humanity as a political solution. There’s a kitchen sink of implications there, considering this isn’t just a guy who writes sci-fi, but also a consultant called upon by the Canadian armed forces to craft potential future scenarios for planning, who holds a Master’s degree in strategic foresight, assuming that means anything. In a sense, it’s a shadow of the idea that all cooperation and communication is political, which most readers would find hard to disagree with.
And the last question left, and perhaps the most paradoxical one, is that if we are able to create such an autonomous network of multifaceted machines able to problem-solve and reproduce, aren’t we essentially recreating life as we know it? The genome is that sort of open-ended code for reproduction and evolution, and machine learning is modeled very much on the neural networks in our heads, and all of this is brought full circle in the computational theory of mind. To say nothing of tool use, inter-species interaction, and resource scarcity of the natural world. Schroeder’s hypothetical thalience is nothing more than an artificial god-game then, a clever but ultimately meaningless plot-twist. Sure, you can automate science, but the complex environment within which it will happen, dictated by all sorts of resource scarcity & allocation between creative pursuits is already here. Just pretend we’re machines.
Paradox? Nah. But maybe we should be looking at machine- or automated-politics instead. How do you rationally decide and allocate actions between competing ideologies, when two things are hypothetically simultaneously equally right and equally wrong? Maybe we’d be able to predict the results of the Romney-Obama election then, more political science than politics.
& at the risk of going on for way too long & overstepping my usefulness, I want to throw out this last idea, as a final coda to all of our talk about science & understanding the world, and an open call for discussion:
I once overheard the historian and radical pedagogue Hayden White say something that resembled the following statement: ‘The misunderstanding is that science is considered more precise than poetry.’ White’s claim encourages that the world be perceived outside that which is solely measurable. While an inch remains an inch yesterday, today, and tomorrow, a poem captures how the poet feels at a precise moment in time. Following from this logic, the measurement of the world inch by inch tells us little about the myriad poetic precisions that make up the world we inhabit. (Yasushi Tanaka-Gutiez, Shoppinghour Magazine #8)
Machine poetics & artificial god-games indeed.
John Agresto of the Wall Street Journal wrote a really provocative piece recently: “Robin Hoods Don’t Smash Shop Windows,” arguing that the left’s concern for the poor, as well as all that fairness and decency rhetoric, unravels rather quickly in practice, and that the ideological left seems to regularly produce a nihilistic fringe and mass protests that degenerate into riots and violence.
Agresto calls the ideological left’s convictions a myth because, historically, the goal of equality under the guise of social benefit has been forwarded by the heavy hand of violence. Mao’s cultural revolution, the French revolution, protests currently happening in Greece and the rest of Europe, and the Occupy crowd consistently show that equality is a revenge story (or a justice story, depending on who is writing history). Is it not the conservatives, with their calls for individual liberty, self-reliance, and hard work, that are the truly virtuous ones, in light of the left’s constant violent equalization in the name of greater good? As conservatives go about trying to make their living, the left fights for the entitlements of equality, always being robbed by someone or not getting their justice, directing their misfortunes and anger outwards, instead of looking inwards and trying harder.
Well. Now that we’re comparing the Occupy crowd to Stalinists and filing all that under the label “Democrat”, I’m sure we can have a reasonable discussion. The road to hell is paved with biased observations, Mr. Agresto.
From the get-go, Agresto builds a straw man for his own argument. You simply can’t compare the fringe elements of one side with the moderate elements of another. This is often a bias everyone makes: what happens when we compare the Tea Party, Donald Trump, the Mafia States of Africa, and the Westboro Baptist Church to the moderate democrats? – we’d simply be demonizing the opposition.
But more than that, the right has just as many turns to violence as the left. The right has just as many fringe groups outside of the mainstream conversation: the hyper-religious gay-bashing and anti-abortionist social conservatives who ostracize and throw rocks at anyone going to the wrong denomination of church are just as bad as the occupiers who ask for jobs and better fiscal regulation while throwing stones at the police.
And if we’re talking legitimate violence, let’s not forget all of the violence of the hyper-right: we’ve got Nazism and Fascism historically, and more recently, the Arizona gunman who shot a Democratic congresswoman and several other people, or the Norwegian guy who shot up an entire camp of left-in-training kids. The differences here are in scale, perhaps: conservative violence, true to its roots in individual values, arrives as instances of individuals lashing out against others in defense of conservative values and traditions.
But of course, it’s a lot easier to distance a group or ideology from one individual acting out, than from a huge movement gone wrong. A lone gunman? Emphasis on lone. But perhaps it is not that the left’s goals are to incite violence so much as that violence is a by-product of local interactions. Consider: true to their values, the left advocates for strength in numbers, and tries to work together, resulting in both democratic governments as well as protests and revolutions (as opposed to trends of the right in fascism and coups). But when large groups of like-goaled people gather, you get a herd mentality and an implicit pressure from the momentum of the group.
As in Romania, where Nicolae Ceausescu was overthrown at a pro-Ceausescu rally by one man shouting “Down with Ceausescu” and the rest unthinkingly copying him, so can any peaceful gathering very quickly be overtaken by a just-as-fringe radical leftist. However, as large gatherings are also always watched by groups of police to contain such fringe elements from exploding, the police presence generally backfires: a heavy-handed response against even one member of the group results in a ripple of back and forth violence that ends in a riot.
Of course, this is more likely to happen on the left, because the right doesn’t want to believe in large groups and movements. The right is pro-individual action, after all. To equate the left with revenge stories based on riots like that would necessitate equating the right with a predatory vulturism and a complete lack of empathy for others, ala Ayn Rand and her heroes. Both sides have their extreme elements; both sides have very respectable moderate and centrist elements as well.
Perhaps a word needs to be said about how people perceive their futures as well. It’s been well noted that voting patterns are based on how the voter will see their futures, and America is the playground of unbridled optimism. That means the rhetoric of the right, that of individual success through hard work, resonates much more with the ‘working man’ than that of attempting to create equality of some sort. To create a rhetoric of achieving equality is to imply that things are so unequal that it will not be resolved by individuals. Calls for equality are pessimistic and defeatist: they are calls for help, and often calls by minorities for action, which quickly devolves into what looks like the minority imposing its views on the majority and vice-versa, or the paradox of tolerance (can you tolerate calls for intolerance?).
Violence from minority groups will come when they have no say in anything and think they can only be heard through some sort of action, while from the other side, that all looks like a bunch of appeasement and extremism. Radical Islam, anyone? The militant minority? A group unrepresented will have fewer options for communicating, and violence becomes a highly effective way to deliver a message.
But there is no balance that can be struck between minorities and majorities (or states rights vs. federal government) simply because there is a gap of understanding, empathy, and values. A natural balance will be found by bursts from one side to another as novelties codify into conventions. Things that were unthinkable in the past will become normal in the future, but the present will be marked by a strong strife of ideas and values. Because, of course, the present is our battlefield for the future. The violence is not a domain of the left, but the domain of the unheard or disagreed-with, regardless of political leanings.
This is all an interesting argument and John Agresto could have made a strong statement, but I guess one can’t hold editorialists to complicated views. And that’s a pity, because he makes some good points: we can’t conflate entitlement with legal and social equality. But we also shouldn’t conflate meritocracy with capitalism, nor the radical elements of an ideology with its moderate counterpoints. Every idea taken to its logical conclusion is a form of extremism. That’s why we can’t always be right. Sometimes, we have to be wrong– or in this case– left.
And P.S. — Robin Hood did smash windows and rob the rich blind. Just sayin’.
Web2.0 is a business model – harnessing and commodifying participatory culture. #digitaltrans
— narayan (@narayan_) May 15, 2012
Politics is the ultimate culmination of ‘meaningful participatory culture’. But, as we all know, politics today is a broken system. Is there any possibility to fix it, to reinvent it somehow? Is there a possibility of harnessing today’s technology to create meaningful participatory politics, or is technology ultimately the problem?
Germany weighs in on the question: New York Times columnist David Brooks writes about the German “Pirate” party, which has created a new way of engaging the political system online:
Using a software package they call Liquid Feedback, the Pirates are able to create a continuous, real-time political forum in which every member has equal input on party decisions, 24 hours a day. It’s more than just a gimmicky Web forum, though: complex algorithms track member input and generate instantaneous collective decisions.
Of course, on some level Liquid Feedback is a gimmick, an effort to get young people interested and involved in the humdrum of German politics, outside the campaign season and even off line. Whatever it is, it works: late last month some 1,300 members trekked to the small northern city of Neumünster to elect a new executive board.
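The “collective decisions” such tools compute rest on delegative (liquid) voting: each member either votes directly or delegates to another member, and delegations chain transitively. A minimal sketch of the counting idea (illustrative only, not Liquid Feedback’s actual implementation, with made-up member names):

```python
# Minimal liquid-democracy tally: follow each member's delegation chain
# to someone who voted directly, guarding against cycles.
def tally(direct_votes, delegations):
    """direct_votes: member -> choice; delegations: member -> delegate."""
    counts = {}
    for member in set(direct_votes) | set(delegations):
        seen = set()
        current = member
        while current in delegations and current not in direct_votes:
            if current in seen:
                current = None  # delegation cycle: the vote is lost
                break
            seen.add(current)
            current = delegations[current]
        if current is not None and current in direct_votes:
            choice = direct_votes[current]
            counts[choice] = counts.get(choice, 0) + 1
    return counts

votes = {"anna": "yes", "bruno": "no"}
dels = {"clara": "anna", "dirk": "clara"}  # dirk -> clara -> anna
print(sorted(tally(votes, dels).items()))  # [('no', 1), ('yes', 3)]
```

Here anna’s direct “yes” carries clara’s and dirk’s delegated votes with it, which is exactly the kind of weighted, real-time aggregation that makes “every member has equal input” computable at scale.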
But on the other hand, Gregory Ferenstein in TechCrunch argues the exact opposite: technology ultimately undermined the desire for democratic participation:
Democracy used to be a part of everyday American life. Frequent carnivals and parades would accompany political debates, as citizen-revelers would schmooze with local politicians, to discuss issues that they had direct control over. As a result, Americans were not only incredibly engaged, but well-read: a higher proportion of people read Thomas Paine’s political philosophy than watch the Superbowl today. They also patiently listened to presidential debates that last 6 or more hours at a time.
Then, technology crashed the party: “By the 1920′s, radio broadcasts had replaced mass meetings and all-day orations,” writes Kornbluh. “As the role of voters became increasingly passive, it is little wonder that their enthusiasm for electoral politics waned.” Political parties had no incentive to subsidize the good times, given the more efficient ways of mass communication at their disposal.
Ultimately, the motivation to vote has to overcome one very big problem: voting is irrational, since no one person can make a difference. No democracy in history has ever sustained high levels of engagement on the hope that citizens are willing to sacrifice their free time to make a marginal difference. The Gilded Age party machines overcame this dilemma by intermixing politics with fun (albeit in often unethical ways).
While “democracy” has ultimately halted in America in favour of political machines, I think the general American sentiment is that things shouldn’t be this way. Can we harness the power of technology, especially today’s Web 2.0 participatory culture, to change politics?
So here’s the debate: how do we go about fixing our political system? Is it technology versus analog participation? Is it about city-states versus federal governing, the problem of scaling? Or does fixing politics involve undermining politics altogether?
Weigh in below. And like in a functioning democracy, your voice makes this debate count.