As they improve, we’ll likely trust AI models with more and more responsibility. But if their autonomous decisions end up causing harm, our current legal frameworks may not be up to scratch.
The immediate examples that come to mind are bots on Reddit; some estimates put Twitter's bot traffic as high as 70%. People interact with them all day, every day, without knowing it.
That quickly spirals into customer service: if you're not talking to a live agent in an overseas call center, you could well be talking to a bot at this point.
A lot of professional business services are exploring AI hard. What happens when one tells the business to do something monumentally stupid and the business does it? Is the fault with the people who trained the AI? With the machine itself, for hallucinating? Or with the poor schmuck at the bottom who pushed the delete button?
It's not cut and dried when you're interacting with a machine anymore.