<p>ChatGPT, an AI bot that can hold conversations on literally anything under the sun (or, as it turns out, beyond the sun too), became a sensation as soon as it was launched. It can debate you on Plato's organic society, solve complex mathematical problems, and pen a research essay, a wistful poem for a lover, or a sonnet on electric cars. Yes, it's quite an invention. But it is not the only one of its kind. Many other companies have developed their own versions of such bots, capable of performing a host of functions.</p>.<p>These include DALL-E 2 (a system that lets you create digital images simply by describing what you want to see) and GPT-3 (a natural-language system that can write, argue and code with brilliant fluency). One of the most interesting is Character A.I., which creates bots that impersonate characters – historical, cultural, and even religious figures. Yes, you can also speak to Jesus. And yes, he speaks a lot in psalms.</p>.<p><strong>Also Read | <a href="https://www.deccanherald.com/business/technology/google-accelerator-invites-new-local-startups-in-india-1203136.html" target="_blank">Google Accelerator invites new local startups in India</a></strong></p>.<p>There are multiple models of the same character, and some of them are chillingly on point. Consider, for example, Jonas Thiel’s conversation with a bot of Karl Kautsky, the Czech-Austrian Marxist theorist. Not only did Thiel find many of the bot's answers strikingly true to Kautsky's tone, he also realised that the bot was excellent at converting complex theoretical arguments into simple language without stripping them of their meaning. Moreover, just like ChatGPT, the bot can be asked to regenerate a response that feels a little off – allowing it to use these inputs to discern more suitable answers.</p>.<p>The engine that equips these bots to impersonate a character in such minute detail is the internet. These bots are built so that their database spans the entire length and breadth of the internet. 
They can talk about everything, because the internet today knows everything. This is not so surprising. After all, isn't that what Siri does on your iPhone or Google Voice Assistant does on an Android smartphone? Yet, this is different.</p>.<p>It's smarter. It has learned from reams of general dialogue as well as from articles, news stories, books and other digital text describing people like Elon Musk, Oscar Wilde and Winston Churchill. We are seeing a significant shift here. AI has moved from regurgitative capability to <span class="italic"><em>creative capability</em></span>. If it can write a poem for a sweetheart, that means it has the intelligence not only to analyse rhyming patterns from pre-existing poetry but also to string them together cogently and effectively, on a single command.</p>.<p><strong>How far does this go? If today it is stories, essays, and creative writing, what stops it from covering normative theory tomorrow? </strong></p>.<p>Consider Marx, for example. He spoke of the withering away of the State after the Revolution, which would lead to a classless society. Some argue there is no concrete sense of what structure of authority follows the dictatorship of the proletariat. How will the State wither away, and how will the proletariat mediate the transition from a dictatorship to a Stateless society? These are some of the more realistic lacunae that plague Marxist theory. What if someone designs a bot that solves these problems? What if it is able to come up with answers that make sense – answers that can actually be institutionalised? How would that affect the very act of human theoretical enterprise?</p>.<p>One can argue that it will not be able to theorise on emotions, values, ideals and aspirations, which are very human. Frankly, I am not so sure. Look at the ad-targeting mechanisms we already have. How many times have we seen ads on Google for precisely what we need! 
The Netflix series<span class="italic"> The Social Dilemma</span> shows how big tech runs high-capacity computers that hold tailored models of each one of us, based on our digital footprints – models used to predict what we will do across a series of different contingencies. Imagine it this way: a machine sitting in Facebook’s basement knows you quite intimately; your interests, what you read, what you eat, drink, order, wear, search for, and even how you think. It uses that information to construct a paradigm in its code, which is then used to target advertisements at you.</p>.<p>If models of this magnitude can indeed be built simply from our digital footprints, what is to prevent them from being combined with the code underlying an AI bot and used to theorise solutions to the problems that confront the world? Clearly, AI can acquire fluency in impersonation simply by drawing on the written material on the internet. The spectre of social media advertising shows that AI can also contingently predict human behaviour. If it can understand human nature thus, I doubt that it is very far from having, at the very least, a nascent capability to theorise on entirely human problems.</p>.<p>We have an illustrious Pratap Bhanu Mehta. Would a Political-theory Bot by Meta be a threat? Let us not get too carried away by this double PBM insinuation, though. No one is suggesting that academics must now run to save their jobs, or that philosophers should consider wage labour. Naturally, the foremost question that will emerge in this context is that of ethics. Normative conjecture necessarily involves an element of ethical judgement. How can a bot be trusted to be ethical? If a being is not social, how can it be ethical? Then there is the threat of AI reflecting human prejudices – and chatbots have also been known to make things up. Researchers call this generation of falsehoods ‘hallucination’. 
If AI can reach its zenith by drawing on a human sample, it is equally capable of stooping to the pits of human prejudice.</p>.<p>Lastly, any realistic appraisal of this projection must account for the fact that these bots can only respond to inputs. Unless we type in our grouse, no AI bot can know what theoretical interrogation to undertake. More importantly, without code that equips a bot for theoretical analysis, there will be no such bot in the first place. Therefore, even if we do get a bot for theory, human creative potential won’t be thwarted, but redirected. Theorists will have to focus their attention on the kind of prompts they give such software, and then use the output as a basis for their theoretical investigations. Instituting this synergy between human and Artificial Intelligence could well become a promising template for the social sciences. Who knows, the two PBMs may just work well together!</p>.<p><span class="italic"><em>(The writer is a student of political science at Kirori Mal College, Delhi University)</em></span></p>