Productivity is up and real wages are down, but humans are still in the game.
This is a very interesting article. I think I need more context on the article's claim of downward wage pressure. If you add 19% more jobs at below-median wage, you can lower the statistical median without incumbents having been affected. I guess it depends on whether this is just low-quality work that wouldn't have existed before, as the article suggests, or whether the new low-wage work replaces high-wage work over the long run.
This is a great guest post! I know some of the initial guest posts got a rocky reception because they were lower quality hot takes, but I would love to see more of this kind of analytic post.
AI kind of scares the shit out of me. It can already do so many things and it is as "bad" as it will ever be.
When I worked on the Yang campaign, AI was cited as the motivation behind the UBI idea. I was always a bit skeptical of this, because the $12,000 amount wasn't really a job-replacement number so much as a safety-net or caretaker's-spending-money number.
It does seem like all but the most AI-enthusiastic have become a little bit down on the upsides of AI and how much having powerful creative computer tools can mean for the average person. This has become a flashpoint in the WGA and SAG contract negotiations because people fear being replaced by AI writers and actors, but it seems clear the world will be culturally richer if we have endless stories to build off of and if any creator can make a film without having to hire the full complement of actors and crew it currently takes. The recent Spider-Man movie had a sequence from a young kid who did some fan animation, and that kid wouldn't really be able to do that in live action at the moment.
All that said, we should really think about what we are going to do when there are fewer jobs, or when people are forced to switch jobs and industries more often because AI takes over. The left's focus on "workers" sometimes means that, at least rhetorically, we are focused on protecting jobs and work rather than the people who would do those jobs.
Bernie's campaign website said "Anyone who works 40 hours a week in America should not be living in poverty." I would wager that he doesn't think people who don't work 40 hours a week should be living in poverty either. There are a ton of reasons beyond money that people hate losing their jobs; there's often a sense of purpose and community that people get from work as well.
If we want people to embrace AI we really need to work on better and faster ways for people to transition from job to job and industry to industry. This means both education and monetary support as well as lowering the barriers for some jobs. I would also be interested in people trying to innovate on the worker/job matching problem because most current job sites are kind of a nightmare to use.
The video game example demonstrates just how much more serious makers have become about getting translations right. It's a far cry from the mistranslations of the games I played as a kid in the 1980s and 1990s, a world in which phrases like "All your base are belong to us" were tolerated.
I just have this dreadful feeling that we're totally unprepared for what AI is going to do to us (or what we're going to do to each other because of AI). Maybe job loss & disruption won't be as bad as we fear, maybe it won't annihilate us. Nonetheless I think we'll soon be living in strange times.
We've barely got a grip on social media & smartphones. AI will be far more disruptive.
As someone who works overseas regularly, I can’t wait until AI real time translation happens.
But... I wouldn’t be optimistic for these translators. Every time they edit a machine translation, the AI is learning that much more. They think they are editing, but really they are just teaching.
Back to real time actual translation. It is going to make the dating game so much easier. Everyone’s potential partners will expand by billions.
Spot on, I can think of countless examples where this is true for me as a scientist. Need an old patent or paper translated from German or Japanese to get the relevant protocols? Google Translate does the job. But need a patent *written* in a different language? Pay the human.
We have a lot of conversations as scientists about what AI will mean for our jobs, because a lot of the issues here apply to other professions. Can we ever automate drug discovery end to end? Not a chance: there's too much nuance to the human body, and even the computational models we have are weak approximations (which don't account for anywhere close to every aspect of human biology). What about chemical synthesis? The planning software has gotten much better, but it can't yet account for all of the stereoelectronic effects in your substrate and all of the side reactions you might see. Maybe it'll get closer, but there will always be a need for a human to optimize based on purification ease or yield or whatever.
On the analogy of Uber:
Uber is highly subsidized by investors and runs at a deep, deep loss. The prices they charged were never realistic.
LLMs like ChatGPT are also highly subsidized. The training is extremely expensive and even the per-query costs are quite high. Maybe the chips will eventually get cheap enough to break even on ads, but I doubt it.
This is the way I've been operating for some time. I'm quite fluent in Spanish but when I have a long or complicated letter or document I need to write, I draft it first in English, Google it into Spanish, and then revise.
Some issues with these two successive paragraphs:
"“If you put ‘trust’ in ChatGPT it's going to translate it to confianza,” Leon said. “But that's not what it means.” In reality, Leon says, there are 20 or 30 different ways to translate the legal concept of a trust to Spanish. Figuring out which meaning to use in any given sentence requires a sophisticated understanding of American law, Spanish law, and the context of the specific document she is translating.
It’s a similar story to Marc Eybert-Guillon’s work localizing video games. His firm provides translation for a wide variety of text, from character dialog to the labels on in-game items like weapons or magic potions. Often he needs to translate a single word or short phrase — too little context for an unambiguous translation."
These people evidently don't know how to use ChatGPT properly. You can perfectly well ask it something like "translate [single word] in the context of xyz," and, as you might expect, it will give you a more accurate translation than just asking it to translate [single word] on its own. Not that this makes it okay to use for legal documents; I'm sure it isn't perfect, and when you're translating into a language you don't know yourself, you can't tell whether it's correct. Still, what these people are saying isn't really accurate.
As an example, I recently saw in a store a new Pringles flavor called "Las Meras Meras Habaneras." I wanted to know what this translated to - obviously something about habaneros, but I couldn't find anything for "meras meras" on google, other than "meras" meaning "mere", which doesn't seem quite right in that context. I couldn't find anything for "meras meras" as a Spanish idiom at all, other than a few seemingly-similar uses.
I asked ChatGPT to translate "Las Meras Meras Habaneras" and it told me it couldn't, because the phrase was not a logical statement in Spanish. Then I added, "For context, the term is a new flavor of Pringles."
ChatGPT promptly replied, "In that case, since 'meras' means 'mere' and also has the sense of 'pure' or 'true,' the phrase likely means something close to 'The true habanero flavor.'"
ChatGPT can perfectly well take different context instructions and use them to modify its output. Maybe I should make a career of teaching people how to properly prompt ChatGPT.
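For what it's worth, the same trick works if you're scripting translations rather than typing into the chat window: bake the disambiguating context into the prompt itself. Here's a minimal sketch of that idea as a plain prompt-building helper (the function name and prompt wording are my own invention, not any official API):

```python
def contextual_translation_prompt(term: str, target_lang: str, context: str = "") -> str:
    """Build a translation prompt, optionally anchored to domain context.

    Without context, the model has nothing to disambiguate with
    (e.g. 'trust' the feeling vs. the legal instrument).
    """
    prompt = f'Translate "{term}" into {target_lang}.'
    if context:
        prompt += f" For context: {context}"
    return prompt

# Bare term: ambiguous, so the model will likely default to "confianza".
print(contextual_translation_prompt("trust", "Spanish"))

# Same term, scoped to the legal sense discussed in the article.
print(contextual_translation_prompt(
    "trust", "Spanish",
    context="the legal concept of a trust in an American estate-planning document",
))
```

The point is just that context is a first-class part of the input, not something the model has to guess at.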
The taxi cab analogy would seem to undermine the “don’t worry too much” thesis: a lot of cabbies ended up committing suicide after the arrival of smartphone apps! Taxi medallion debt might be considered a special case, but a lot of knowledge workers enter the market deeply in debt from college loans.
So the thesis is that this is merely the next step of automation, nothing more, nothing less. If so, it's still pretty bad for the workers, imo, for reasons others have laid out. However, I'd like to point out something else. What automation does, I think, is hollow out the middle ground. It takes a service previously available to the middle class, or to the average person in moderate amounts, and converts it into a shittier but good-enough product available in abundance, while the previous, higher-quality product that was "normal" for middle-class people becomes a status symbol of the super rich. There are pros and cons to this process, but we certainly lose something, especially those of us in the middle and upper-middle (but not super-rich) strata of society.
On the economic disruption caused by AI: the doomers tend to think there's going to be a massive spike in disruption one day, where unemployment skyrockets to double digits purely because of AI. Isn't a more likely outcome just a slightly increased rate of disruption? As far as the economic impact of AI is concerned, I'm not super worried; this seems like something the Fed can handle.
As for other risks, that AI could create something undesirable, I'm a little bit more worried, but I still feel like I've seen this movie before, and betting against doomers is usually a good bet.
I work in video games, and the breakneck speed at which much of the web3 crowd jumped onto the AI bandwagon tells me there is more than a little scamming going on at the moment.
It is changing the industry but it’s far less capable than a tech bro would enthusiastically tell you over coffee at Urth cafe.
The generative voice stuff is hilariously bad at times. I did a demo at GDC where the guy kept telling me, “oh, don’t talk to that guy, he’s not working right now.” Sounds like a compelling and rich world you built there, partner!
I know it’s improving and maybe it does take a bunch of jobs but the disconnect from where it is to what they are currently promising is pretty massive.
I was having a related discussion the other day (on another blog) about foreign language study.
This piece highlights some of the shortcomings of the current state of machine translation. And those shortcomings are real. But think where we were with this stuff eight or ten years ago. And imagine where we'll be five or ten or (gulp) twenty years from now. It's not hard to imagine—I personally don't see what's to stop their arrival—systems sufficiently powerful to provide flawless, highly automated, real time *interpretation*.
I suspect foreign language learning may (somewhat sadly, to be sure) one day go the way of cursive writing instruction: it'll be hard to justify devoting time to studying a foreign language when a machine can do it for you. Perhaps in the fullness of time this dynamic may even be the one force that impedes the relentless conquest of the world by the English language.