New York City defends AI chatbot that advised entrepreneurs to break laws

The MyCity chatbot was touted as the first city-wide use of such AI technology, something that would give business owners "actionable and trusted information" in response to queries typed into an online portal.
Reuters

New York: New York City Mayor Eric Adams is defending the city's new artificial intelligence chatbot that has been caught in recent days giving business owners wrong answers or advice that, if followed, would entail breaking the law.

When launched as a pilot in October, the MyCity chatbot was touted as the first city-wide use of such AI technology, something that would give business owners "actionable and trusted information" in response to queries typed into an online portal.

That has not always proved the case: journalists at the investigative outlet The Markup first reported last week that the chatbot was getting things wrong. It wrongly advised that employers could take a cut of their workers' tips, and that there were no regulations requiring that bosses give notice of changes to employees' schedules.

"It's wrong in some areas, and we've got to fix it," Adams, a Democrat, told reporters on Tuesday, emphasizing that it was a pilot program. "Any time you use technology, you need to put it into the real environment to iron out the kinks."

Adams has been an ardent advocate for deploying untested technology in the city with an optimism that is not always vindicated. He put a 400-pound vaguely ovoid robot in the Times Square subway station last year that he hoped would help police deter crime; it was retired about five months later, with commuters noting that it never appeared to be doing anything, and that it could not use stairs.

The chatbot remained online on Thursday and was still sometimes giving wrong answers. It said store owners were free to go cashless, apparently oblivious to the city council's 2020 law banning stores from refusing to accept cash. It still thinks the city's minimum wage is $15 per hour, though it was raised to $16 as of 2024.

The chatbot, which relies on Microsoft's Azure AI service, appears to be led astray by problems common to so-called generative AI technology platforms such as ChatGPT, which are known to sometimes make things up or assert falsehoods with HAL-like confidence.

Microsoft declined to say what might be causing the problems, but said in a statement it was working with the city to fix them. The city's Office of Technology and Innovation said in a statement that "as soon as next week, we expect to significantly mitigate inaccurate answers."

Neither Microsoft nor City Hall responded to questions about what was causing the errors and how they might be fixed.

The city has updated disclaimers on the MyCity chatbot website, noting that "its responses may sometimes be inaccurate or incomplete" and telling businesses to "not use its responses as legal or professional advice."

Andrew Rigie, who advocates for thousands of restaurant owners as the director of the NYC Hospitality Alliance, said he had heard from business owners perplexed by the chatbot's responses.

"I commend the city for trying to use AI to help businesses, but it needs to work," he said, warning that following some of the chatbot's guidance could bring serious legal consequences. "If when I ask a question and then I have to go back to my lawyers to know whether or not the answer is correct, it defeats the purpose."

(Published 05 April 2024, 09:04 IST)