Text to speech has been a technology for a very long time. This is, in my opinion, a whole article about nothing, leaning on the AI label to garner views.
Yes, we can and should ask whether more speculative uses of AI have negative implications, but that isn't what's happening here.
It is very much like asking, at the sight of a new motor carriage driving down the street, whether cars upon invention would start driving into random fields with no restraint, off-roading as if any car owner would do this. Important questions to ask of emergent technology, sure, but right now that motor carriage is on the road, let it be.
This feels like a Daily Mail article for a slightly different audience. Is this what's now referred to as "rage baiting"?
Is this a real judge, or is an "Immigration Judge" one of those not-actually-a-judge decisionmakers employed by the executive?
The latter. They're not even real administrative law judges.
Alright, this is hitting the fan now. If AI takes over judiciary and policy making, then we are officially in the kingdom of AI.
I read the article as him writing the text himself and using AI just for turning it into audio. Which is evidently frowned upon, but to me doesn't constitute AI taking over policy making.
That's not at all what the article says is happening.
"Immigration Judge John P. Burns has been using artificial intelligence *to generate audio recordings of his courtroom decisions* at the New York Broadway Immigration Court, according to internal Executive Office for Immigration Review (EOIR) records obtained by Migrant Insider." [Emphasis added]
I am not convinced that you read the article. Which specific action described in it do you have a grievance with?
It's the eagerness with which high offices let AI enter courtroom affairs, which are considered sacrosanct. If the trend continues, judgments might be delivered by AI, and then you'd be asking why we can't let a more intelligent system take over the role of human judges and policymakers. That's the grievance.
Seems like they didn't even read the headline...
So if he's writing the decisions himself and using an AI voice to read them, big deal. It's pretty much a nothingburger, unless the AI voice somehow misread something in a legally relevant way. If he's using AI to generate decision text, that's a more serious issue.
Generative text to speech models can hallucinate and produce words that are not in the original text. It's not always consequential, but a court setting is absolutely the sort of place where those subtle differences could be impactful.
Lawyers dealing with gen-AI TTS rulings should compare what was spoken with what was in the written order to make sure there aren't any meaningful discrepancies.
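That comparison is mechanical enough to script. A minimal sketch, assuming you already have a transcript of the audio (from a human transcriber or an ASR pass; the function name and example text here are hypothetical, not from the article), using Python's standard-library `difflib` to flag word-level mismatches:

```python
import difflib

def find_discrepancies(written: str, spoken: str):
    """Return word-level differences between a written order and a
    transcript of the AI-read audio. Each entry is (kind, written
    words, spoken words), where kind is 'replace', 'delete', or
    'insert'."""
    written_words = written.split()
    spoken_words = spoken.split()
    sm = difflib.SequenceMatcher(a=written_words, b=spoken_words)
    diffs = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag != "equal":  # keep only the spans that differ
            diffs.append((tag,
                          " ".join(written_words[i1:i2]),
                          " ".join(spoken_words[j1:j2])))
    return diffs

# Hypothetical example: the TTS audio dropped the word "not",
# exactly the kind of subtle but legally meaningful discrepancy.
order = "The motion is not granted"
audio_transcript = "The motion is granted"
print(find_discrepancies(order, audio_transcript))
# → [('delete', 'not', '')]
```

A real workflow would also need to normalize punctuation, numerals, and spoken-out abbreviations before diffing, since TTS and ASR both reword those legitimately.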
While not as bad as AI rendering the decision itself, obviously, I wouldn't exactly call it a nothingburger. It feels completely inauthentic and dystopian.
I can only imagine the hell of being nervous in a big court case waiting for the decision, and hearing that annoying TikTok lady deliver the bad news.