Thoughtful. I can relate to a lot of what you’ve written here. What are your thoughts on the rich getting richer? I really worry about wealth inequality.
I haven’t thought about this aspect of AI nearly as much, just because I spend a lot of time thinking about writing code and very little time thinking about macroeconomics. But I think you’re probably right to be worried, especially with the hype explosion caused by ChatGPT. I think ChatGPT is nigh-useless in a real practical sense, but it may work very well for the purpose of making OpenAI money. They’ve announced partnerships with huge consulting firms. It’s easy to imagine it as an engine for making money for corporations based mostly on its perceived amazing capabilities, without actually improving the world in any way. Consulting is already a cesspool of huge sums of capital that accomplish nothing (I work in consulting). ChatGPT is perfect for accelerating that. It’s also worth pointing out that these latest models like GPT require ungodly amounts of data and compute—you have to be obscenely rich to create them.
> I think ChatGPT is nigh-useless in a real practical sense
That’s a very strong take. If you want something to write a short story or essay for you, it’s mad useful. It can also take a fast food order pretty reliably off the shelf.
I’m a programmer, and ChatGPT is incredibly useful to me right now. I use it like a search engine that can pull up the perfect example to get me over hurdles. Even when it’s wrong, it’s still useful—like having a human tutor who can also be wrong.
Ultimately, I think this type of AI will have an effect similar to Google’s.
Not OP, but it’s a real concern, although at this rate of progression I wonder if AI ethics will have much of a splash before AI alignment becomes the question.