Yes. For example, when an AI-based system is hyped or deployed, very often it is touted as an improvement over whatever the previous (human-involving) system was, because supposedly it will “make objective decisions”, “without prejudices”, “based on cold data”, and so on.
Each instance of this needs to be called out, every time, because these decisions end up being anything but.
Oh yeah that’s a good point! Thank you