This is a very interesting article. I think I need more context on the idea that there's downward wage pressure based on the description in the article. If you add 19% more jobs at below-median wages, you can lower the statistical median without incumbents having been affected. I guess it depends on whether they're just adding low-quality work that wouldn't have existed before, as mentioned in the article, or whether this new low-wage work replaces the high-wage work over the long run.
Yes, this is a great distinction to draw. It could be either that existing translators are seeing downward pressure on their wages or that the industry is drawing in new workers who make lower wages. I don't know of any research trying to tease apart these effects. Mark Hemming's comments make me think that there's probably at least some downward pressure on wages, but that's only one perspective, so it could be wrong.
I guess another thing that could be happening is that any expanding field sees median wages go down because early-career professionals have lower wages, but that this evens out as they gain more experience.
Yeah. It's also possible that wages were already trending down for other reasons. For example, the Internet may be making it easier for American companies to hire translators in India or South Africa or wherever.
This is a great guest post! I know some of the initial guest posts got a rocky reception because they were lower-quality hot takes, but I would love to see more of this kind of analytic post.
Tim has a long history of writing high-quality articles; glad to see Matt introduce him to all Slow Borers with a guest article.
Thank you both!
AI kind of scares the shit out of me. It can already do so many things and it is as "bad" as it will ever be.
When I worked on the Yang campaign, AI was cited as the motivation behind the UBI idea. I was always a bit skeptical of this because the $12,000 amount wasn't really a job-replacement number so much as a safety net or caretaker's spending money number.
It does seem like all but the most AI-enthusiastic have become a little bit down on the upsides of AI and how much having powerful creative computer tools can mean for the average person. This has become a flashpoint in the WGA and SAG contracts because people fear being replaced by AI writers and actors, but it seems clear the world will be richer culturally if we have endless stories to build off of and if any creator can make a film without having to hire the full complement of actors and crew that it currently takes. The recent Spider-Man movie had a sequence from a young kid who did some fan animation, and that kid wouldn't really be able to do that in live action at the moment.
All that said, we should really think about what we are going to do when there are either fewer jobs or when people are forced to switch jobs and industries more often because AI takes over. The left's focus on "workers" sometimes means that, at least rhetorically, we are focused on protecting jobs and work rather than the people who would do those jobs.
Bernie's campaign website said "Anyone who works 40 hours a week in America should not be living in poverty." I would wager that he doesn't think people who don't work 40 hours a week should be living in poverty either. There are also a ton of reasons beyond money that people hate losing their jobs; there is often a sense of purpose and community that people get from work as well.
If we want people to embrace AI we really need to work on better and faster ways for people to transition from job to job and industry to industry. This means both education and monetary support as well as lowering the barriers for some jobs. I would also be interested in people trying to innovate on the worker/job matching problem because most current job sites are kind of a nightmare to use.
I can certainly understand why they'd want it, and Matt Y, like every(?) other figure on the left I'm aware of, supports the WGA, but I struggle with union demands that effectively amount to a refusal to improve productivity by producing more content with fewer workers.
Whether it's writers or construction workers, I think we're a ways away from the problem being *too much* labor-saving technology.
This article was something - https://www.nytimes.com/2017/12/28/nyregion/new-york-subway-construction-costs.html
In the case of the WGA, they just think they might be replaced by AI rather than AI being a useful tool for them.
It leaves them in a weird place where they want to both say that AI writing is not good (because if it is good, people might want it) but also that studios will replace them with it to save whatever small part of the budget usually goes to writers. They are able to do this by saying that studios will use it even if AI writing is bad because they are greedy. Playing to people's cynicism about studio execs is a useful move in the PR battle but ultimately doesn't really answer the question of what AI could do for the industry.
Yup, that is always the argument.
You need to convince the consumers they will get a worse product because the owners of "capital" don't care about "quality" and are going to screw you to save a buck. Even if you get a fraction of the savings, it's still not worth it to you.
Which, to be fair, has some merit to it... if we're talking about, say, nursing home attendants. However, the entertainment industry seems like it might be the worst possible example.
Those aren't incompatible though, right?
It could be that unionizing boosts Chipotle employee morale and that boosts burrito quality. But it boosts their morale mostly because it boosts pay/benefits in a way that increases the cost too.
Chipotle might just not pay more because they don't believe the higher quality is worth the extra cost, and they think consumers might agree.
I think it's plausibly "accurate" for the union to say that "without us, the quality will go down" (and of course elide the fact that prices could go down too), but that doesn't mean consumers might not take the deal if given the choice.
So, while you will definitely see writers saying things like that on Twitter, that is not, at least to my understanding, what the WGA is actually asking for in regards to AI.
I'm an editor, not a writer, so I've been following this but could absolutely be missing something, so anybody please correct me if I'm wrong, but:
The only thing in the WGA Pattern of Demands that deals with AI is a request that AI generated scripts cannot be considered "source material." That is, you can use AI to help write a script, but if you do so you cannot claim that the AI is the Author of the script and the human writer who completes it is merely "adapting" the material. It actually does not address at all any scenarios where AI scripts fully replace human writers. If all the WGA demands were accepted today, you could have GPT write a script and take it straight to production without breaking any union rules.
Currently, I think the WGA demands on this basically make sense. AI tools are absolutely at the point where they can help a writer or a writing team work through a script faster but they really aren't at the point where they can put out a Hollywood script without a ton of work, and it's hard to argue that that level of work doesn't constitute authorship.
They just want to avoid the situation where a producer, instead of just pitching a Zombie Horror movie to a writer, asks GPT-4 to write the movie and then pays a writer for a rewrite or adaptation, which are much lower-paying tasks.
Now, I could imagine that after the current collective bargaining agreement ends in another three years, AI will have advanced to the point that it's much, much less work to take a GPT script and turn it into something producible. Then it might make more sense to claim that a human writer is merely doing a rewrite pass. But right now I think the WGA has a pretty good official position on this even if various members may be saying dumb stuff at times.
I'm not that close to it, but my understanding is that "minimum number of writers" for "minimum amount of time" is still a core push? Coupled with David's comments below about literally not being allowed to use AI for just about anything, I have trouble being sympathetic. Seems like pure protectionism under the guise of 'quality'. I'm sure many of them are sincere... that's how rationalization works.
"The guild argues that studios are squeezing more work out of fewer writers over a shorter time span, and paying them less than they’re entitled to. And the union’s leadership believes that it’s time to set basic standards around the size and duration of a writers’ room."
edit - I shouldn't say I'm not sympathetic. I actually am even if it doesn't sound like it. Job insecurity sucks. I just don't see how in this case it justifies overriding a general goal of flexibility towards increasing productivity.
Oh yes, you're definitely right that keeping fully staffed writing rooms regardless of how many writers are actually required is basically the core argument. The AI part frankly isn't that big of an issue in the actual negotiations going on, it just gets discussed a lot more because it's a sexy topic. Most of it is standard collective bargaining stuff and everything you said applies.
My point is just about the AI specifically. It's just not true that you are not allowed to use AI for anything. In fact, quite the opposite.
The section I found is here:
"Regulate use of artificial intelligence on MBA-covered projects: AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI."
There is a process used to determine authorship/credits between writers so you could just use that same process to determine if the AI would get credit if you wanted to.
I think their demands make sense in the same way any union's demands do. They want to increase the number of jobs and pay for those jobs that their members can do.
I don't know that they make sense from a broader point of view aiming to maximize the utility of technology but I am not sure that they are trying to.
I think that it's pretty easy to square the circle here if you acknowledge the possibility that studios wildly overestimate how good AI-generated content within the current paradigm can be, try to switch over to LLM-generated production to save on costs, and only return to methods that work well after they start losing money (at which point the writers will have been facing hardship for some time). Sort of like how the pivot to reality TV in the late aughts played out.
The WGA's demands also frankly seem suspect because, as far as I'm aware, screenwriters aren't required to track their time to the tenth of an hour with task codes like lawyers, or to only write in supervised environments where people can make sure they are doing the work themselves. Thus, it seems quite plausible that *writers* in many situations will make liberal use of AI writing to speed up/enhance their own creative processes, which raises the question of why studios, etc., can't just eliminate the middleman and formally hire people to "polish" AI-produced content. (Just like there are already script doctors/polishers who work over human-produced content.)
The WGA demands don't prevent anyone, either writers or the studios, from using AI scripts. They only prevent studios from downgrading a job from "screenplay by" to "adaptation by" because of the use of AI tools.
Thank you for that additional information. I haven't seen a detailed description of the demands.
My understanding on the WGA thing is that it goes back to how a job is defined: writing a screenplay from scratch is a higher rate than reworking or adapting existing material. The fear is that to avoid paying the higher rates, studios will have AI churn out material that they can then have writers adapt, thus avoiding paying the normal rate. I don’t think it’s unreasonable for a group to say “nope,” when faced with that prospect, because you can reasonably look at that and say “you’re just using this as a way of sidestepping paying the rate you ought to pay.” A reasonable accommodation might be to just say that adapting AI-produced materials goes at the same rate as original works.
I understand that's the fear, but my point goes to how the studios would know, in the "Age of AI," whether a given writer is writing from scratch versus just adapting AI-created work that the writer themself prompted. My knowledge of day-to-day screenwriter work is limited, but, at least as it's depicted in TV shows and movies, it appears that screenwriters do a lot of work outside any sort of traditionally monitored workflow, e.g., working from home at night or on the weekends. (Obviously, those depictions could be false, but I would expect screenwriters to be more accurate when depicting the process of screenwriting than a lot of other things!)
To put it another way, you hear stories in interviews and Hollywood biographies about screenwriters doing things like checking into a hotel and slamming out a substantially complete feature film script in a three-day fit of writing. If the screenwriter punches in a prompt to ChatGPT on Friday night before a long weekend, then spends four to six hours revising it over the weekend, and comes in with a feature-length script on Tuesday, is there any way the studio can know what amount of work was actually performed? (My impression has always been that, outside of TV shows with regular writing teams, screenwriters are paid more on a piecework basis than an hourly basis with timecards, etc., but maybe that's incorrect on my part.)
People (Matt is one) bring up sectoral bargaining, but I've never understood why that would help. Doesn't the WGA/SAG basically cover the entire "sector"? What would change under that model in this sector, and how would it improve productivity?
Would it be better if *ALL* the US ports were covered under the same bargaining agreement that slows up all the automation on the West Coast?
I don't think I fully understand the topic, and I should go back and search for the argument about why this isn't simply a labor tactic to increase bargaining power. Which is fine if that's your goal, but I don't see how it helps with efficiency.
The video game example demonstrates just how much more serious game makers have become about getting translations correct. It's a far cry from the mistranslations of games I played as a kid in the 1980s and 1990s, a world in which phrases like "All your base are belong to us" were tolerated.
As someone who works overseas regularly, I can’t wait until AI real time translation happens.
But... I wouldn’t be optimistic for these translators. Every time they edit a machine translation, the AI is learning that much more. They think they are editing, but really they are just teaching.
Back to real time actual translation. It is going to make the dating game so much easier. Everyone’s potential partners will expand by billions.
There’s a case to be made that changing the dating pool from the few dozen people you see in bars and work and through friends, to everyone within a ten-mile GPS radius on a dating app, may have made people’s dating experience worse. When you have more people to choose between, your standards get higher - and perhaps more importantly, *their* standards get higher too, so everyone takes far longer to find a reasonable match, and ends up with one not much better. Expanding to the whole world could make that dynamic that much worse.
Being on Hinge right now, I think this is very accurate.
Dating apps also expose more of us quickly. Things that shouldn't matter (or that might matter but can be overcome by other positive attributes) are now front and center; examples include job title, hometown, astrological sign, and marijuana use. This encourages snap judgments.
This sounds to me similar to the internet optimism of the early 2000s. By contrast I think it might exacerbate the current problems of the internet. If we can all talk we can all fight.
Re dating, since most people meet via dating apps these days and a lot of initial communication starts with exchanging text messages, it won't be long before a variant of ChatGPT* will be generating witty, charming banter for you to copy and paste into the text box.
* Yes, of course the trademarked name for this will be "Cyrano."
"since most people meet via dating apps these days..."
I don't think this is correct. Pew says that only 53% of people under 30 report having EVER USED a dating site or app. I suspect the majority of people still meet through school, work, church, mutual friends, etc.
As someone who has been married for a while, I've not used a dating app or site, but anecdotally I know plenty of people who have and success is "mixed" at best. Even for people using such tools, many of them ended up with someone they met through other ways.
That’s something I confidently predicted (in a discussion forum pretty much like this one) twenty years ago. Responses to the idea were almost entirely skeptical.
It’ll happen one day. But imagine the potential bumps in the road when a less-than-perfect translator chooses the wrong word or idiom!
"Every time they edit a machine translation, the AI is learning that much more."
What's the mechanism by which the data is being fed back into the machine? For legal docs/etc, or video game text, we don't put these out on public networks.
I mean, it'll get better over time anyway, but unless you're feeding back the translated text with the source text, how is it improving based on *your* translation?
The pros often use integrated software where the human corrections are made inside the translation UI. Therefore the company that makes the software can use these corrections to improve its algorithm.
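To make the mechanism concrete, here's a minimal sketch of what that feedback loop might look like, assuming a hypothetical CAT (computer-assisted translation) tool; the function and field names are illustrative, not any vendor's actual API. Each human save becomes a (source, machine output, post-edit) record the software company can later train on:

```python
import json
from datetime import datetime, timezone

def log_post_edit(source_text, machine_output, human_edit, lang_pair,
                  path="post_edits.jsonl"):
    """Append one human correction as a future training example."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "lang_pair": lang_pair,  # e.g. "en-es"
        "source": source_text,
        "machine_output": machine_output,
        "human_edit": human_edit,
        # Flag whether the human actually changed anything:
        "was_corrected": machine_output != human_edit,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Every "save" in the translation UI could emit one of these:
log_post_edit("The trust was dissolved.",
              "La confianza fue disuelta.",   # machine's attempt
              "El fideicomiso fue disuelto.", # human's correction
              "en-es")
```

So even when the documents never touch a public network, the software vendor ends up holding exactly the paired data needed to improve its model.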
I just have this dreadful feeling that we're totally unprepared for what AI is going to do to us (or what we're going to do to each other because of AI). Maybe job loss & disruption won't be as bad as we fear, maybe it won't annihilate us. Nonetheless I think we'll soon be living in strange times.
We've barely got a grip on social media & smartphones. AI will be far more disruptive.
Spot on, I can think of countless examples where this is true for me as a scientist. Need an old patent or paper translated from German or Japanese to get the relevant protocols? Google Translate does the job. But need a patent *written* in a different language? Pay the human.
We have a lot of conversations as scientists around what AI will mean for our jobs, because a lot of the issues here apply to other professions. Can we ever automate drug discovery end to end? Not a chance, there's too much nuance to the human body that even the computational models we have are weak approximations (and those don't account for anywhere close to every aspect of human biology). What about chemical synthesis? The planning software has gotten much better, but it can't yet account for all of the different stereoelectronic effects in your substrate and all of the side reactions you might see. Maybe it'll get closer, but there will always be the need for the human to optimize based on purification ease or yield or whatever.
I found listening to some of the AI-generated audio instructive. I get that the use of AI here might have been a sort of “meta” demonstration, but as such it really highlights the shortcomings. It's understandable but pretty bad. Hearing e.g. “you-ber” (Uber) is distracting, and the reading is monotone, making comprehension marginally harder and satisfaction somewhat lessened. The whole thing is less than 13 min at normal speed. Even allowing for some editing and error correction, it shouldn't have taken the author more than, say, 30 min to record himself actually reading it. I think it would have been worth the effort to do so in terms of the impression left on the listener. I imagine this kind of “penalty” in terms of user experience can be modeled by economists. I wonder whether that will limit the full takeover of AI or if we are merely going to gradually lower our standards and expectations of quality from a whole bunch of services.
Thanks for the feedback. I think it would take me more than 30 minutes to produce a high-quality recording because every time I misspeak (which I do fairly often) I have to go back and re-record a sentence. Also a significant portion of the prep time was finding and copying over the source quotes. Plus it was my first time using the software. I expect that once I know the tools well I'll be able to do it in about an hour.
Still I agree about the audio quality. I also noticed the You-ber problem. The question is whether we'll continue to see quality improve. This stuff is a lot better than it was five years ago, so in five more years perhaps You-ber type gaffes will be a thing of the past.
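For what it's worth, the You-ber class of gaffe is often fixable today without waiting for better models: most commercial TTS engines (Amazon Polly and Google Cloud Text-to-Speech, for example) accept SSML, which lets you pin the pronunciation of a known problem word. A sketch; the commented-out synthesis call is hypothetical, but the `<phoneme>` tag is standard SSML:

```python
# Wrap a known-problem word in an SSML phoneme hint before synthesis.
# The IPA string is the ordinary English pronunciation of "Uber".
ssml = """
<speak>
  I took an <phoneme alphabet="ipa" ph="ˈuːbər">Uber</phoneme>
  to the airport.
</speak>
"""

# The exact call varies by vendor; hypothetically:
# audio = tts_client.synthesize(ssml, voice="en-US-Neural")
```

The catch is that someone still has to notice each mispronunciation and add the hint, which eats into the time savings.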
I think that's the key question. And also I should say that I appreciate the "meta" quality of using the AI for this particular piece, but generally speaking my hunch is that, at least for shorter pieces like this, actually recording yourself might be the better choice, at least for now.
Uber is highly subsidized by investors and runs at a deep, deep loss. The prices they charged were never realistic.
LLMs like ChatGPT are also highly subsidized. The training is extremely expensive and even the per-query costs are quite high. Maybe the chips will eventually get cheap enough to break even on ads, but I doubt it.
I don't really agree with this. I don't know if these neural networks are breaking even right now, but I don't think there's much doubt that Moore's law will make them profitable at scale. Amazon Web Services has been obscenely profitable for several years now despite steadily cutting their prices.
AWS is profitable because web dev is insanely cheap. I run a medium-sized news site and CPU and bandwidth are free. All the money we spend is for database hosting and image resizing. So far at least, LLMs take orders of magnitude more computing power.
A good person to talk to about this is Tim Bray. He used to be an engineering VP at AWS. He had some throwaway line on Mastodon about how you can feel all the compute being burned by ChatGPT. The fact is that even for ChatGPT 3.5, responses are super slow. The reason responses are slow is that there is a huge and economically unviable amount of compute being thrown at them.
In the long run, yes, Moore's law will probably make it viable, but if the minimum LLM experience people expect is even more expensive, it might all just wash out. As it is, the compute used by bigger LLMs is scaling up faster than Moore's law. Again, everything can change, but it's also wrong to just assume that it will all take care of itself.
My understanding was that it takes enormous computing power to train an LLM, but not so much to run it after the fact. So it could be unprofitable to train it initially, but then it's quite reasonable to use it.
I'm sure there is more to it than that, but would be interested to see you lay out more details.
The training vs inference very much depends on the nature of the product itself, and how popular it is. Training is expensive, but inference does not yet have "zero marginal cost" economics. That makes a big difference when compared to the traditional cloud model.
In Nov 2022, MidJourney was running into issues with cloud capacity to keep up. 90% of their cloud costs were from inference, and only 10% from training. On top of gating usage by price, they still had to (and continue to) rate-limit usage of premium users.
In the 8 months since then, both the number of "users" and "online now" (both include lurkers) have increased 4x -- to 16.65M and 1.44M respectively. Despite the increase in traffic, I have heard from some longtime users that the user experience, especially wait times, has improved. To me this indicates that their inference costs are going down. But they are still on the order of 100x smaller than Twitter and 1000x smaller than FB.
Here's what Bard says about operating costs for ChatGPT.
If we assume that ChatGPT uses 8 GPUs to operate, and that each GPU costs $3 an hour, then each word generated on ChatGPT costs $0.0003. At least 8 GPUs are in use to operate on a single ChatGPT, and each question typically generates around 30 words. This means that the per question cost of ChatGPT is around $0.009, or 9 cents.
However, the actual per question cost may be lower than this. For example, if ChatGPT is able to reuse some of the computing resources from previous questions, then the per question cost will be lower. Additionally, if ChatGPT is able to be more efficient in its use of computing resources, then the per question cost will also be lower.
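Bard's arithmetic deserves a sanity check, by the way: $0.0003 per word times 30 words is $0.009, which is about nine-tenths of a cent, not 9 cents. Working through the quoted numbers (all of them Bard's assumptions, not measured figures):

```python
gpu_count = 8
gpu_cost_per_hour = 3.00   # dollars per GPU, per the quote
cost_per_word = 0.0003     # dollars, per the quote
words_per_answer = 30

cost_per_answer = cost_per_word * words_per_answer
print(f"${cost_per_answer:.4f} per answer")  # $0.0090, i.e. ~0.9 cents

# Implied throughput needed for the per-word figure to hold:
cluster_cost_per_second = gpu_count * gpu_cost_per_hour / 3600
print(f"{cluster_cost_per_second / cost_per_word:.1f} words/sec")  # ~22.2
```

So even taking the quote at face value, the per-question cost is under a penny, an order of magnitude lower than the "9 cents" Bard states.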
Moore’s law is going to run out of steam in the middle of this decade (and may already have run out of steam—Nvidia’s Jensen Huang, who would probably know, pronounced it dead in 2022) because of limits imposed by physics— you can only cram so many circuits into a silicon wafer. We’ll still get improvements in compute from improved parallelization and specialized chip designs optimized for performing specific tasks, but the late 20th century’s exponential improvements can’t scale.
I think that with the right set of optimizations, running LLM instances will be cost-effective for a lot of tasks, but the “just 10x the number of parameters” strategy for improving performance will stop being viable (because of both training and operating costs), so we’ll probably see a ceiling on their sophistication until there’s some sort of major paradigm shift. At the moment, I think LLMs are on track to be a useful and commercially important but not world-shattering tech.
All way beyond my pay grade, but I keep reading claims that we'll be able to squeeze more time out of Moore's law because of better software and better materials. Also quantum computing?
QC is completely irrelevant here; anyone who says this is BSing. Software is sort of orthogonal to Moore's law - the point of Moore's law is that you don't need to pay programmers to optimize things; everything gets better automatically.
This is the way I've been operating for some time. I'm quite fluent in Spanish but when I have a long or complicated letter or document I need to write, I draft it first in English, Google it into Spanish, and then revise.
This is how I have been experimenting with ChatGPT. I obviously correct the things that are factually wrong, but I have also found the generated text to be way too flowery and corny for my style. But this has got me thinking that maybe the New England-bred spareness in my writing may not be the most effective.
Probably just in the same way that, seeing a draft written by someone else, I can see the differences between how they put it and how I did, and in many cases see why their way of putting it might be more effective, even if I wouldn’t have thought of it that way.
The taxi cab analogy would seem to undermine the “don’t worry too much” thesis--a lot of cabbies ended up committing suicide after the arrival of smartphone apps! Taxi medallion debt might be considered a special case, but a lot of knowledge workers enter the market deeply in debt from college loans.
What I wrote was "workers in other industries don’t need to worry about AI taking over their jobs overnight." Certainly I think it's reasonable for workers to worry a little bit and to plan accordingly. And yes, I think people should think twice before accumulating a lot of debt, though I don't think the value of a college degree is going to crash the way a taxi medallion's value did.
Agree that most college graduates shouldn’t worry, though I do wonder about translators. The AI shock may not happen all at once, but the tech also keeps getting better. An industry can appear to be declining slowly and then experience something like a “cliff moment.” Probably in this case it will take care of itself through attrition--hard to imagine many people are entering college today with the express ambition of becoming a translator.
Technology always causes this sort of upheaval - there were a heck of a lot more railroad employees before cars came around. No one is owed a perfectly stable & perfectly secure career, and certainly no one is owed a protected sinecure when technology has made their job unnecessary.
The two situations you've mentioned, where a personal debt load makes the loss of career particularly difficult, are not the fault of technology at all, but rather of some unusual societal frameworks. Technology can hardly be blamed for stressing those already-odd frameworks.
Another thing is...there are options. While student debt can't be purged in bankruptcy, taxi medallion debt certainly can. Now bankruptcy does take a toll on your life - but it's not like there's no way to get out of that debt but suicide. We don't live in Dickensian Britain.
Even with student debt, what happens if you don't pay? Your credit score tanks and they'll start docking your paychecks. So what? It sucks, but it's not like they come harvest your kidneys to recoup the money. My grandfather got an MBA in his 40s and never paid a cent back he didn't have to. When he died, the poor bastards were still docking his social security checks for the student loan debt, and hadn't recouped even 20% before he kicked it. There's a way to die "up."
The thing about these kinds of technologies is that they help people with fewer skills and they hurt people with specialized skills. Unfortunately, we know that most knowledge workers have an above average skill level relative to their peers in knowledge work, so the adjustment is going to be quite rough.
Accepting arguendo that the number of cabbie suicides triggered by competition from Uber, etc. qualifies as "lots" (this is one of these things where it is completely unclear what the "normal rate" is, i.e., how many cabbies commit suicide over a comparable length of time when ridesharing smartphone apps weren't available, so we can't actually say whether there's been a statistically significant increase), it appears that the victims were overwhelmingly, if not exclusively, immigrants to the US: https://www.nytimes.com/2018/12/02/nyregion/taxi-drivers-suicide-nyc.html
Given the paucity of native-born Americans committing suicide in this context, it seems reasonable to think that the suicides were driven as much by a lack of understanding of the US bankruptcy system, US tax system, and/or possibly fear that loss of their business might not just have financial consequences, but lead to them being deported if they aren't naturalized US citizens. I'm skeptical the same pressures apply to most knowledge workers, especially when income-based repayment plans for student loans and public service loan deferrals/forgiveness are already an established thing.
I don't want to claim (and don't think I did claim) that nobody is going to face hardship as a result of AI progress. Economic change is always stressful for people and I don't want to minimize that. I just think a lot of people are overestimating its likely severity.
"“If you put ‘trust’ in ChatGPT it's going to translate it to confianza,” Leon said. “But that's not what it means.” In reality, Leon says, there are 20 or 30 different ways to translate the legal concept of a trust to Spanish. Figuring out which meaning to use in any given sentence requires a sophisticated understanding of American law, Spanish law, and the context of the specific document she is translating.
It’s a similar story to Marc Eybert-Guillon’s work localizing video games. His firm provides translation for a wide variety of text, from character dialog to the labels on in-game items like weapons or magic potions. Often he needs to translate a single word or short phrase — too little context for an unambiguous translation."
These people evidently don't know how to use ChatGPT properly. You can perfectly well ask it something like "translate [single word] in the context of xyz," and it will give you a more accurate translation than just asking it to translate [single word] without context, as you might expect. Not that this makes it okay to use it for legal documents, as I'm sure it isn't perfect - and when you're translating into a language you don't know yourself, you don't know if it's correct. Still... what these people are saying isn't really accurate.
As an example, I recently saw in a store a new Pringles flavor called "Las Meras Meras Habaneras." I wanted to know what this translated to - obviously something about habaneros, but I couldn't find anything for "meras meras" on google, other than "meras" meaning "mere", which doesn't seem quite right in that context. I couldn't find anything for "meras meras" as a Spanish idiom at all, other than a few seemingly-similar uses.
I asked ChatGPT to translate "Las Meras Meras Habaneras" and it told me it couldn't, because the phrase was not a logical statement in Spanish. Then I added, "For context, the term is a new flavor of Pringles."
ChatGPT promptly replied, "In that case, since 'meras' means 'mere' and also has the sense of 'pure' or 'true', the phrase likely means something close to 'The true habanero flavor.'"
ChatGPT can perfectly well take different context instructions and use them to modify its output. Maybe I should make a career of teaching people how to properly prompt ChatGPT.
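To make that concrete, here's roughly what the same trick looks like through the API rather than the chat window. This is a sketch against the OpenAI Python client as it worked in mid-2023; the model name and prompt wording are my assumptions:

```python
import openai  # pip install openai; reads OPENAI_API_KEY from the environment

def translate_with_context(term, context, target_lang="Spanish"):
    """Translate a single term, supplying the context the bare term lacks."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Translate '{term}' into {target_lang}. "
                f"Context: {context}. "
                "Give the single best translation and briefly say why."
            ),
        }],
    )
    return response["choices"][0]["message"]["content"]

# Without context, "trust" tends to come back as "confianza";
# with legal context, a sense like "fideicomiso" is far more likely.
print(translate_with_context("trust", "a clause in a U.S. estate-planning document"))
```

None of which guarantees the legally correct choice among 20 or 30 candidates, but it shows the model isn't limited to the context-free reading.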
I don’t see how this is relevant here. The context for the legal translation is some substantial part of the whole corpus of Spanish and English law, including professional know-how that isn’t explicitly written down anywhere. I can believe you can get ChatGPT to give you a list of all the ways to translate “trust” into Spanish, but not at all that you can get the legally correct one without writing a prompt just as expert as the human translation.
As I said, a legal context requiring very specific correct terminology is different - but it's not correct to say that ChatGPT can't understand context to some degree. It can, it just won't be 100% tight 100% of the time.
Certainly for something like translating items in a video game, if you tell it to translate "stick" with no context, it could say "adhere" as easily as "a small piece of wood", but if you give it the context that it's something someone is holding and hitting things with, it's probably going to give you an accurate translation of "stick" for that context. Probably.
But all you'd need to translate an English text fluently - if not precisely - into Spanish is ChatGPT plus a Spanish speaker to check that the translated terminology is correct in context. You don't actually need anyone fluent in both languages - just someone fluent in the target language. If there's ambiguity, and the Spanish speaker wants to ask the English writer to clarify something, they can most likely use ChatGPT to translate their communications as well.
Sure, the output isn't guaranteed to be perfect - which means it isn't suited to a legal context - but "mostly perfect" is certainly good enough for a task like translating a video game, where exactly-correct language is rarely crucial. Heck, I sometimes play video games in German (which I don't speak) just for kicks, and it doesn't make it that much harder to understand what's going on.
I don't understand why this process of "gather necessary context, give it to ChatGPT, take the output and share it with a native Spanish speaker who isn't a translator, possibly have a back-and-forth with the Spanish speaker to figure out the best word" would be more efficient than paying a professional translator. It sounds like you're replacing one worker with two for no good reason.
There are a lot more people who speak one language than two, so their labor will be less expensive, for one thing - especially in the context of translations between, say, Finnish and Chinese, where bilinguals are a lot rarer than English-Spanish.
This is assuming that the double-checking is even necessary, which outside of a legal context, it usually isn't. In the context of a video game with a medieval fantasy setting, for example, you would say to ChatGPT, "Given the context of a video game in a medieval fantasy setting aimed at teens, translate the following list of strings", and you would probably get context-correct output in most cases.
A bigger concern than accuracy, I think, would be tone, but you could control that as well by asking for translation "in the style of Gabriel Garcia-Marquez" or "in the style of Miguel de Cervantes", as appropriate for the video game or other piece of media in question.
This would probably cover all the needs of someone making a low-budget video game for whom saving on translation while gaining access to other-language markets, at the possible cost of some accuracy, would be an excellent trade. For a bigger-budget video game, where a large developer wants to be careful to avoid any output that could be perceived as insensitive or problematic, you'd probably want the output read over by native-language speakers even if you did pay for a professional translation, since translators themselves aren't perfect. So in either case ChatGPT is at least as useful as a human translator, and cheaper - and not limited to one or a few languages as most human translators are.
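Here's a sketch of that low-budget workflow: set the context once in a system message, then send the whole string table in one request. The prompt wording and model name are assumptions, and in practice you'd validate that the reply is actually the JSON you asked for before shipping it:

```python
import json
import openai  # reads OPENAI_API_KEY from the environment

def localize_strings(strings, target_lang="Spanish"):
    """Translate a game's string table under one shared context preamble."""
    preamble = (
        "You are localizing a video game with a medieval fantasy setting "
        f"aimed at teens. Translate each string into {target_lang}, keeping "
        "item names punchy and dialogue in character. Return a JSON object "
        "mapping each original string to its translation."
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": preamble},
            {"role": "user", "content": json.dumps(strings, ensure_ascii=False)},
        ],
    )
    return json.loads(response["choices"][0]["message"]["content"])

table = ["Rusty Sword", "Potion of Minor Healing", "The gate is barred."]
print(localize_strings(table))
```

A native speaker then reads over the output, which covers the cases where the model picks the wrong sense of a word like "stick."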
So the thesis is that this is merely the next step of automation. Nothing more, nothing less. If so, it's still pretty bad for the workers imo, for reasons others laid out. However, I'd like to point out something else. What automation does - I think - is hollow out the middle ground. It takes a service previously available to the middle class, or to the average person in moderate amounts, and converts it to a shittier but good-enough product available in abundance, whereas the previous, higher-quality product which was "normal" for middle-class people becomes a status symbol of the super rich. There are pros and cons to this process, but we certainly lose something, especially the "we" who are in the middle and upper-middle (but not super-rich) strata of society.
What do you mean by "previously available to the middle class?" I don't see any reason that the introduction of AI would dramatically raise the cost of an old-fashioned translation. It just makes good-enough AI cheap enough that the high cost of a human translation is no longer worth it for most purposes. But people can still pay it if they want to.
Isn't that what follows from your own prediction? If translation dies as a massive industry, it will be much harder for a middle-class person to find a human translator at a decent price, especially one who knows what they're about. It's easy to buy pretty good furniture from Ikea, but I bet it's far more expensive to have high-quality furniture custom made by a highly skilled carpenter than it was a century ago. Cooking for yourself at home and eating out have both become cheaper thanks to technology, but how many middle-class people can afford their own full-time cook? Having a private secretary, fully qualified to write your correspondence etc., used to be standard for professionals. Doubtless far more people have access to ChatGPT and phone calendars than ever had secretaries, but a highly skilled, human "personal assistant" (or however they're called nowadays), which is still obviously better than all the tech put together, is trending toward becoming the mark of the upper echelons. And so on.
P.S.
I’m not nostalgic for the past. I think that- thus far at least- rise in productivity has been a *net* good on the long term. All I’m pointing out is that even if it’s a net good for society, some of us are permanently losing out on some things, and that’s true even from the consumer’s perspective, not just the worker’s.
It's more expensive to hire a carpenter than 100 years ago because wages in general have gone up, and so carpenters' incomes have gone up along with everyone else's. Ditto for cooks, secretaries, etc. But this isn't because living standards have gone down. Quite the contrary. The "middle class" of 100 years ago that could afford maids and cooks were in the top 5 percent if not the top 1 percent of the income distribution. They were able to afford this kind of labor because most people had very low wages and so there was a lot of surplus labor around.
The people we call the middle class today are a completely different slice of the income distribution, from say the 40th to 90th percentile. People in that portion of the income distribution in 1923 would not have had servants. They just had a much worse standard of living due to the lack of washing machines, vacuum cleaners, Ikea, etc.
So yes, rising wages have made life worse in some ways for people in the 95th to 99th percentile of the income distribution because they have to get by with fewer servants. But it's been good for people in the bottom 90 percent of the income distribution who couldn't afford servants in 1923 and can't afford them now. Overall it seems like clear progress to me.
In 1920 only 22 percent of white people aged 25 to 29 had high school diplomas and only 4 percent had a college degree. So people look back, read that it was common for people with college degrees to have servants, and conclude that living standards have fallen. But in reality, people who had college degrees in 1923 were a totally different slice of the income distribution than people with college degrees today. Having a college degree in 1920 made you a member of a tiny elite, and they could easily afford servants because pay for non-college graduates was very low.
Today, many more people have college degrees (35 percent of all young American adults) and median wages are a lot higher. So unsurprisingly most college graduates today can't afford to hire servants. That's because prosperity is far more widely shared than it was a century ago.
You'll notice that you didn't answer my question - unless you assume that *only* people with college degrees had servants? Anyway, you keep harking back to an uncontested point (aka a straw man). I ask again, did you read my P.S.? We're in agreement that society today is better off. That's hardly the point.
Most, or perhaps all, of your examples are cases where the middle class was priced out by Baumol's cost disease, which is triggered by productivity improvements in *other* industries.
"I bet it’s far more expensive to have custom made high quality furniture by a highly skilled carpenter than it was a century ago."
Is this actually true in real terms? I think the difference is that 100 years ago you could buy really crappy products or super expensive well made products, but there wasn't very much decent stuff in the middle.
You can still buy the super expensive stuff now, but most people don't want to. Similar to other parts of fashion, people now often will want to switch to new looks or designs after a decade or so instead of buying super expensive furniture and having it for life.
But, in order for translation to die out, that means AI translation has to get a lot better.
It won't be "you have to settle for current gen AI translation because you can't afford one of the translator specialists that you previously could have afforded", it'll be some much better translation.
Basically, when you say: " It takes a service previously available to the middle class, or to the average person in moderate amounts, and converts it to a shittier but good-enough product available in abundance"
If it was good enough to kill most of the translation industry then I'm not convinced it will be meaningfully worse than what you could afford now, so I think the word "shittier" is wrong there.
I work in video games and the breakneck speed at which many of the web3 crowd jumped on to the AI bandwagon tells me there is more than a little scamming going on at the moment.
It is changing the industry but it’s far less capable than a tech bro would enthusiastically tell you over coffee at Urth cafe.
The generative voice stuff is hilariously bad at times. I did a demo at GDC where the guy kept telling me, "Oh, don't talk to that guy, he's not working right now." Sounds like a compelling and rich world you built there, partner!
I know it’s improving and maybe it does take a bunch of jobs but the disconnect from where it is to what they are currently promising is pretty massive.
This is a very interesting article. I think I need more context on the idea that there's downward wage pressure based on the description in the article. If you add 19% more jobs at below median wage then you can lower the statistical median without incumbents having been affected. I guess it depends on whether they're just adding low quality work that wouldn't have existed before as mentioned in the article or whether this new low wage work replaces the high wage work over the long run.
Yes, this is a great distinction to draw. It could be either that existing translators are seeing downward pressure on their wages or that the industry is drawing in new workers who make lower wages. I don't know of any research trying to tease apart these effects. Mark Hemming comments make me think that there's probably at least some downward pressure on wages, but that's only one perspective so it could be wrong.
I guess another thing that could be happening is that any expanding field sees median wages go down because early career professionals have lower wages but that this evens out when they get more experience.
Yeah. It's also possible that wages were already trending down for other reasons. For example, the Internet may be making it easier for American companies to hire translators in India or Soth Africa or wherever.
This is a great guest post! I know some of the initial guest posts got a rocky reception because they were lower quality hot takes, but I would love to see more of this kind of analytic post.
Tim has a long history of writing high quality articles, glad to see Matt introduce him to all Slow Borers with a guest article.
Thank you both!
AI kind of scares the shit out of me. It can already do so many things and it is as "bad" as it will ever be.
Working on the Yang campaign AI was cited as the motivation behind the UBI idea. I was always a bit skeptical of this because the $12,000 amount wasn't really a job replacement number as much as a safety net or caretaker's spending money number.
It does seem like all but the most AI-enthusiastic have become a little bit down of the upsides of AI and how much having powerful creative computer tools can mean for the average person. This has become a flashpoint in the WGA and SAG contracts because people fear being replaced by AI writers and actors but it seems clear the world will be richer culturally if we have endless stories to build off of and if any creator can make a film without having to hire the full crew of actors and crew that it currently takes. The recent Spider-Man movie had a sequence from a young kid who kid some fan animation and that kid wouldn't really be able to do that in live action at the moment.
All that said, we should really think about what we are going to do when there are either less jobs or when people are forced to switch jobs/industries more often because AI takes over. The left's focus on "workers" sometimes means that, at least rhetorically, we are focused on protecting jobs and work rather than just the people that would do those jobs.
Bernie's campaign website said "Anyone who works 40 hours a week in America should not be living in poverty." I would wager that he doesn't think people who don't work 40 hours a week should be living in poverty either. There are a ton of reasons beyond money people hate losing their jobs because there is often a sense of purpose and community that people get from work as well.
If we want people to embrace AI we really need to work on better and faster ways for people to transition from job to job and industry to industry. This means both education and monetary support as well as lowering the barriers for some jobs. I would also be interested in people trying to innovate on the worker/job matching problem because most current job sites are kind of a nightmare to use.
I can certainly understand why they'd want it, and Matt Y like every(?) other figure on the left I'm aware of supports the WGA, but I struggle with union demands that effectively amount to a refusal to improve productivity by producing more content with less workers.
Whether it's writers or construction workers. I think we're a ways a way from the problem being *too much* labor saving technology.
This article was something - https://www.nytimes.com/2017/12/28/nyregion/new-york-subway-construction-costs.html
In the case of the WGA they just think they might be replaced by AI rather than be a useful tool for them.
It leaves them in a weird place where they want to both say that AI writing is not good(because if it is good people might want it) but also that studios will replace them with it to save whatever small part of the budget usually goes to writers. They are able to do this by saying that studios will use it even if AI writing is bad because they are greedy. Playing to people's cynicism about studio execs is a useful move in the PR battle but ultimately doesn't really answer the question of what AI could do for the industry.
Yup, that is always the argument.
You need to convince the consumers they will get a worse product because the owners of "capital" don't care about "quality" and are going to screw you to save a buck. Even if you get a fraction of the savings still not worth it to you.
Which, to be fair, has some merit to it... if we're talking about say nursing home attendants. However the entertainment industry seems like it might be the worst possible example.
Those aren't incompatible though right?
It could be that unionizing boosts Chipotle employee morale and that boosts burrito quality. But it boosts their morale mostly because it boosts pay/benefits in way that increases the cost too.
Chipotle might not just pay more because they don't believe the higher quality is worth the extra cost and they think consumers might agree.
I think it's plausibly "accurate" for the union to say that "without us, the quality will go down"(and of course elide the fact that prices could go down too) but that doesn't mean consumers might not take the deal if given the choice.
So, while you will definitely see writers saying things like that on twitter. That is not, at least to my understanding, what the WGA is actually asking for in regards to AI.
I'm an editor not a writer, so I've been following this but could absolutely be missing something so anybody please correct me if I'm wrong, but
The only thing in the WGA Pattern of Demands that deals with AI is a request that AI generated scripts cannot be considered "source material." That is, you can use AI to help write a script, but if you do so you cannot claim that the AI is the Author of the script and the human writer who completes it is merely "adapting" the material. It actually does not address at all any scenarios where AI scripts fully replace human writers. If all the WGA demands were accepted today, you could have GPT write a script and take it straight to production without breaking any union rules.
Currently, I think the WGA demands on this basically make sense. AI tools are absolutely at the point where they can help a writer or a writing team work through a script faster but they really aren't at the point where they can put out a Hollywood script without a ton of work, and it's hard to argue that that level of work doesn't constitute authorship.
They just want to avoid the situation where a producer, instead of just pitching asking a writer to write a Zombie Horror movie, asks GPT4 to write the movie and then pays a writer for a rewrite or adaptation, which are much lower paying tasks.
Now, I could imagine that after the current collective bargaining agreement ends in another three years, AI will have advanced to the point that it's much, much less work to take a GPT script and turn it into something producible. Then it might make more sense to claim that a human writer is merely doing a rewrite pass. But right now I think the WGA has a pretty good official position on this even if various members may be saying dumb stuff at times.
I'm not that close to it but my understanding is "minimum number of writers" for "minimum amount of time" is still a core push? Couple with David's comments bellow about literally not being allowed to use AI for just about anything and I have trouble being sympathetic. Seems like pure protectionism under the guise of 'quality'. I'm sure many of them are sincere... that's how rationalization works.
"The guild argues that studios are squeezing more work out of fewer writers over a shorter time span, and paying them less than they’re entitled to. And the union’s leadership believes that it’s time to set basic standards around the size and duration of a writers’ room."
https://variety.com/2023/biz/news/wga-david-goodman-ellen-stutzman-meredith-stiehm-contract-1235556042/
...
edit - I shouldn't say I'm not sympathetic. I actually am even if it doesn't sound like it. Job insecurity sucks. I just don't see how in this case it justifies overriding a general goal of flexibility towards increasing productivity.
Oh yes, you're definitely right that keeping fully staffed writing rooms regardless of how many writers are actually required is basically the core argument. The AI part frankly isn't that big of an issue in the actual negotiations going on, it just gets discussed a lot more because it's a sexy topic. Most of it is standard collective bargaining stuff and everything you said applies.
My point is just about the ai specifically. It's just not true that you are not allowed to use AI for anything. In fact quite the opposite.
The section I found is here:
"Regulate use of artificial intelligence on MBA-covered projects: AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI."
There is a process used to determine authorship/credits between writers so you could just use that same process to determine if the AI would get credit if you wanted to.
I think their demands make sense in the same way any union's demands do. They want to increase the number of jobs and pay for those jobs that their members can do.
I don't know that they make sense from a broader point of view aiming to maximize the utility of technology but I am not sure that they are trying to.
I think that it’s pretty easy to square the circle here if you acknowledge the possibility that studios wildly overestimate how good AI-generated content within the current paradigm can be, try to switch over to LLM-generated production to save on costs, and only return to methods that work well after they start losing money (at which point the writers will have been facing hardship for some time.) Sort of like how the pivot to reality TV in the late aughts played out.
The WGA's demands also frankly seem suspect because, as far as I'm aware, screen writers aren't required to track their time to the tenth of an hour with task codes like lawyers or to only write in supervised environments where people can make sure they are doing the work themselves. Thus, it seems quite plausible that *writers* in many situations will make liberal use of AI writing to speed up/enhance their own creative processes, and which raises the question of why studios, etc. can't just eliminate the middle man and formally hire people to "polish" AI-produced content. (Just like there are already script doctors/polishers who work over human-produced content.)
The WGA demands don't prevent anyone, either writers or the studios, from using AI scripts. It only prevents studios from downgrading a job from "screenplay by" to "adaptation by" because of the use of AI tools.
Thank you for that additional information. I haven't seen a detailed description of the demands.
My understanding on the WGA thing is that it goes back to how a job is defined: writing a screenplay from scratch is a higher rate than reworking or adapting existing material. The fear is that to avoid paying the higher rates, studios will have AI churn out material that they can then have writers adapt, thus avoiding paying the normal rate. I don’t think it’s unreasonable for a group to say “nope,” when faced with that prospect, because you can reasonably look at that and say “you’re just using this as a way of sidestepping paying the rate you ought to pay.” I reasonable accommodation might be to just say that adapting AI produced materials goes at the same rate as original works.
I understand that's the fear, but my point is going to how do the studios know in the "Age of AI" that a given writer is writing from scratch versus just adapting AI created work that the writer themself prompted? My knowledge of day-to-day screenwriter work is limited, but, at least as it's depicted in TV shows and movies, it appears that screenwriters do a lot of work outside any sort of traditionally monitored workflow situation, e.g., working from home at night/on the weekends. (Obviously, those depictions could be false, but I would expect screenwriters to be more accurate when depicting the process of screenwriting than a lot of other things!)
To put it another way, you hear stories in interviews and Hollywood biographies about screenwriters doing things like checking into a hotel and slamming out a substantially complete feature film script in a three-day fit of writing. If the screenwriter punches a prompt into ChatGPT on Friday night before a long weekend, then spends four to six hours revising it over the weekend, and comes in with a feature-length script on Tuesday, is there any way the studio can know what amount of work was actually performed? (My impression has always been that, outside of TV shows with regular writing teams, screenwriters are paid more on a piecework basis than an hourly basis with timecards, etc., but maybe that's incorrect on my part.)
People (Matt is one) bring up sectoral bargaining, but I've never understood why that would help. Doesn't the WGA/SAG basically cover the entire "sector"? What would change under that model in this sector, and how would it improve productivity?
Would it be better if *ALL* the US ports were covered under the same bargaining agreement that slows down all the automation on the West Coast?
I don't think I fully understand the topic, and I should go back and search for the argument for why this isn't simply a labor tactic to increase bargaining power. Which is fine if that's your goal, but I don't see how it helps with efficacy.
The video game example demonstrates just how much more serious game makers have become about getting translations right. It's a far cry from the mistranslations in games I played as a kid in the 1980s and 1990s, a world in which phrases like "All your base are belong to us" were tolerated.
As someone who works overseas regularly, I can’t wait until real-time AI translation happens.
But... I wouldn’t be optimistic for these translators. Every time they edit a machine translation, the AI is learning that much more. They think they are editing, but really they are just teaching.
Back to actual real-time translation. It is going to make the dating game so much easier. Everyone’s pool of potential partners will expand by billions.
There’s a case to be made that changing the dating pool from the few dozen people you see in bars, at work, and through friends to everyone within a ten-mile GPS radius on a dating app may have made people’s dating experience worse. When you have more people to choose between, your standards get higher - and perhaps more importantly, *their* standards get higher too, so everyone takes far longer to find a reasonable match, and ends up with one not much better. Expanding to the whole world could make that dynamic that much worse.
Being on Hinge right now, I think this is very accurate.
Dating apps also expose more of us quickly. Things that shouldn't matter (or that might matter but can be overcome by other positive attributes) are now front and center; examples include job title, hometown, astrological sign, and marijuana use. This encourages snap judgments.
This sounds to me similar to the internet optimism of the early 2000s. By contrast, I think it might exacerbate the current problems of the internet. If we can all talk, we can all fight.
Re dating, since most people meet via dating apps these days and a lot of initial communication starts with exchanging text messages, it won't be long before a variant of ChatGPT* will be generating witty, charming banter for you to copy and paste into the text box.
* Yes, of course the trademarked name for this will be "Cyrano."
"since most people meet via dating apps these days..."
I don't think this is correct. Pew says that only 53% of people under 30 report having EVER USED a dating site or app. I suspect the majority of people still meet through school, work, church, mutual friends, etc.
As someone who has been married for a while, I've not used a dating app or site, but anecdotally I know plenty of people who have and success is "mixed" at best. Even for people using such tools, many of them ended up with someone they met through other ways.
I’ve already tried this. “Please generate a sincere living apology for my wife”
And . . . ?
(I assume you meant "loving" unless the infraction to be apologized for was subject to capital punishment.)
South Park did it. 😂
https://www.youtube.com/watch?v=hEk0Tas7xgE
“…the dating game…”
That’s something I confidently predicted (in a discussion forum pretty much like this one) twenty years ago. Responses to the idea were almost entirely skeptical.
It’ll happen one day. But imagine the potential bumps in the road when a less-than-perfect translator chooses the wrong word or idiom!
You don’t think there’s a pretty strong advantage to looking for partners who are within your general geographic area?
You obviously have never been to Colombia.
"Every time they edit a machine translation, the AI is learning that much more."
What's the mechanism by which the data is being fed back into the machine? Legal docs, video game text, and the like don't get put out on public networks.
I mean, it'll get better over time anyway, but unless you're feeding the translated text back along with the source text, how is it improving based on _your_ translation?
The pros often use integrated software where the human corrections are made inside the translation UI. Therefore the company that makes the software can use these corrections to improve its algorithm.
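To make the mechanism concrete, here's a minimal sketch of what that feedback loop can look like on the vendor's side (all names here are hypothetical, and this is just one plausible way to do it):

```python
# Minimal sketch of how a translation-tool vendor might harvest human
# post-edits as training data. All names are hypothetical.
import json

def log_post_edit(source, machine_output, human_edit, path="post_edits.jsonl"):
    """Append one (source, MT output, human correction) triple to a dataset."""
    record = {
        "source": source,         # original sentence
        "mt": machine_output,     # what the engine produced
        "reference": human_edit,  # what the human translator changed it to
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Every correction made in the UI doubles as a new labeled training example
# for the next round of fine-tuning:
log_post_edit(
    source="The trust shall be administered in Texas.",
    machine_output="La confianza será administrada en Texas.",
    human_edit="El fideicomiso será administrado en Texas.",
)
```

The translator experiences this as editing; the vendor experiences it as labeled data.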
Ahh, did not know that, thank you.
I just have this dreadful feeling that we're totally unprepared for what AI is going to do to us (or what we're going to do to each other because of AI). Maybe job loss & disruption won't be as bad as we fear, maybe it won't annihilate us. Nonetheless I think we'll soon be living in strange times.
We've barely got a grip on social media & smartphones. AI will be far more disruptive.
Spot on, I can think of countless examples where this is true for me as a scientist. Need an old patent or paper translated from German or Japanese to get the relevant protocols? Google Translate does the job. But need a patent *written* in a different language? Pay the human.
We have a lot of conversations as scientists about what AI will mean for our jobs, because a lot of the issues here apply to other professions. Can we ever automate drug discovery end to end? Not a chance; there's so much nuance to the human body that even the computational models we have are weak approximations (and those don't account for anywhere close to every aspect of human biology). What about chemical synthesis? The planning software has gotten much better, but it can't yet account for all of the different stereoelectronic effects in your substrate and all of the side reactions you might see. Maybe it'll get closer, but there will always be a need for a human to optimize based on ease of purification or yield or whatever.
I found listening to some of the AI-generated audio instructive. I get that the use of AI here might have been a sort of “meta” demonstration, but as such it really highlights the shortcomings. It’s understandable but pretty bad. Hearing e.g. “You-ber” (Uber) is distracting, and the reading is monotone, making comprehension marginally harder and satisfaction somewhat lessened. The whole thing is less than 13 minutes at normal speed. Even allowing for some editing and error correction, it shouldn’t have taken the author more than, say, 30 minutes to record himself actually reading it. I think it would have been worth the effort in terms of the impression left on the listener. I imagine this kind of “penalty” in terms of user experience can be modeled by economists. I wonder whether it will limit the full takeover of AI, or whether we are merely going to gradually lower our standards and expectations of quality from a whole bunch of services.
Thanks for the feedback. I think it would take me more than 30 minutes to produce a high-quality recording because every time I misspeak (which I do fairly often) I have to go back and re-record a sentence. Also a significant portion of the prep time was finding and copying over the source quotes. Plus it was my first time using the software. I expect that once I know the tools well I'll be able to do it in about an hour.
Still, I agree about the audio quality. I also noticed the You-ber problem. The question is whether we'll continue to see quality improve. This stuff is a lot better than it was five years ago, so in five more years perhaps You-ber-type gaffes will be a thing of the past.
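For what it's worth, the You-ber class of gaffe is often fixable today without waiting for better models: most major TTS engines accept SSML, which lets you substitute a pronunciation by hand. A minimal sketch using Google's Cloud Text-to-Speech Python client (assuming it's installed and credentials are configured; the alias spelling is just my guess at what sounds right):

```python
# Sketch: patching a mispronunciation via SSML's <sub alias="..."> tag.
# Assumes the google-cloud-texttospeech package and working credentials.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

ssml = """
<speak>
  Companies like <sub alias="oober">Uber</sub> use machine translation too.
</speak>
"""

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(ssml=ssml),
    voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

with open("clip.mp3", "wb") as f:
    f.write(response.audio_content)
```

Hand-patching every proper noun doesn't scale, of course, so the real question remains whether the underlying models keep improving.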
I think that’s the key question. And I should say that I appreciate the “meta” quality of using the AI for this particular piece, but generally speaking my hunch is that, at least for shorter pieces like this, actually recording yourself might be the better choice - at least for now.
On the analogy of Uber:
Uber is highly subsidized by investors and runs at a deep, deep loss. The prices they charged were never realistic.
LLMs like ChatGPT are also highly subsidized. The training is extremely expensive and even the per-query costs are quite high. Maybe the chips will eventually get cheap enough to break even on ads, but I doubt it.
I don't really agree with this. I don't know if these neural networks are breaking even right now, but I don't think there's much doubt that Moore's law will make them profitable at scale. Amazon Web Services has been obscenely profitable for several years now despite steadily cutting their prices.
Here's an interesting article that describes fast-approaching limits to scaling:
https://asteriskmag.com/issues/03/the-transistor-cliff
AWS is profitable because web dev is insanely cheap. I run a medium-sized news site, and CPU and bandwidth are effectively free; all the money we spend is for database hosting and image resizing. So far at least, LLMs take orders of magnitude more computing power. A good person to talk to about this is Tim Bray, a former engineering VP at AWS. He had some throwaway line on Mastodon about how you can feel all the compute being burned by ChatGPT.

The fact is that even for ChatGPT 3.5, responses are super-slow, and the reason is that a huge and economically unviable amount of compute is being thrown at them. In the long run, yes, Moore’s law will probably make it viable, but if the minimum LLM experience people expect keeps getting more expensive, it might all just wash out. As it is, the compute used by bigger LLMs is scaling up faster than Moore’s law. Again, everything can change, but it’s also wrong to just assume that it will all take care of itself.
My understanding was that it takes enormous computing power to train an LLM, but not so much to run it after the fact. So it could be unprofitable to train it initially, but then it's quite reasonable to use it.
I'm sure there is more to it than that, but would be interested to see you lay out more details.
The training-vs-inference split very much depends on the nature of the product itself, and how popular it is. Training is expensive, but inference does not yet have "zero marginal cost" economics, and that makes a big difference compared to the traditional cloud model (the toy sketch below makes the arithmetic concrete).
In Nov 2022, MidJourney was running into issues with cloud capacity keeping up. 90% of their cloud costs were from inference and only 10% from training. On top of gating usage by price, they still had to (and continue to) rate-limit premium users.

In the 8 months since then, both the number of "users" and "online now" (both include lurkers) have increased 4x - to 16.65M and 1.44M respectively. Despite the increase in traffic, I have heard from some longtime users that the user experience, especially wait times, has improved. To me this indicates that their inference costs are going down. But they are still on the order of 100x smaller than Twitter and 1000x smaller than FB.
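Here's the toy sketch. Every number is made up purely for illustration; the point is only the structure of the costs:

```python
# Toy model of training-vs-inference economics. All numbers are hypothetical.
training_cost = 50e6        # one-time cost to train the model (amortized)
cost_per_query = 0.004      # marginal inference cost (GPUs, power) per request
revenue_per_query = 0.003   # what a query earns (ads, prorated subscriptions)

# With traditional cloud software, marginal cost is ~0, so more traffic always
# helps. With LLM inference, every extra query costs real money:
for queries in (1e8, 1e9, 1e10):
    profit = queries * (revenue_per_query - cost_per_query) - training_cost
    print(f"{queries:>14,.0f} queries -> profit ${profit:,.0f}")

# If revenue_per_query < cost_per_query, growth makes the losses *bigger*,
# which is the opposite of zero-marginal-cost economics.
```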
Here's what Bard says about operating costs for ChatGPT.
If we assume that ChatGPT uses 8 GPUs to operate, and that each GPU costs $3 an hour, then each word generated on ChatGPT costs $0.0003. At least 8 GPUs are in use to operate on a single ChatGPT, and each question typically generates around 30 words. This means that the per question cost of ChatGPT is around $0.009, or 9 cents.
However, the actual per question cost may be lower than this. For example, if ChatGPT is able to reuse some of the computing resources from previous questions, then the per question cost will be lower. Additionally, if ChatGPT is able to be more efficient in its use of computing resources, then the per question cost will also be lower.
“…around $0.009, or 9 cents”
I keep seeing articles about how AI will revolutionize the accounting profession.
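For the curious, redoing Bard's own arithmetic (the inputs below are Bard's assumptions, not verified figures; only the multiplication is mine):

```python
# Sanity-checking Bard's self-reported numbers. The inputs are Bard's own
# assumptions, not verified figures.
gpus = 8
gpu_cost_per_hour = 3.00      # dollars, per Bard
cost_per_word = 0.0003        # dollars, per Bard
words_per_answer = 30         # per Bard

cost_per_answer = cost_per_word * words_per_answer
print(f"${cost_per_answer:.4f} per answer")  # $0.0090, i.e. 0.9 cents, not 9

# Implied throughput for the per-word figure to hold with 8 GPUs at $3/hr:
words_per_hour = gpus * gpu_cost_per_hour / cost_per_word
print(f"{words_per_hour:,.0f} words per hour")  # 80,000
```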
Moore’s law is going to run out of steam in the middle of this decade (and may already have; Nvidia’s Jensen Huang, who would probably know, pronounced it dead in 2022) because of limits imposed by physics: you can only cram so many circuits into a silicon wafer. We’ll still get improvements in compute from better parallelization and specialized chip designs optimized for specific tasks, but the late 20th century’s exponential improvements can’t continue.
I think that with the right set of optimizations, running LLM instances will be cost-effective for a lot of tasks, but the “just 10x the number of parameters” strategy for improving performance will stop being viable (because of both training and operating costs), so we’ll probably see a ceiling on their sophistication until there’s some sort of major paradigm shift. At the moment, I think LLMs are on track to be a useful and commercially important but not world-shattering tech.
All way beyond my pay grade, but I keep reading claims that we'll be able to squeeze more time out of Moore's law because of better software and better materials. Also quantum computing?
QC is completely irrelevant here; anyone who says this is BSing. Software is sort of orthogonal to Moore - the point of Moore* is that you don't need to pay programmers to optimize things; everything gets better automatically.
*More so Dennard scaling, but whatever.
"deep, deep loss" seems extreme. The market has baked in reaching operating profitability this year.
This is the way I've been operating for some time. I'm quite fluent in Spanish but when I have a long or complicated letter or document I need to write, I draft it first in English, Google it into Spanish, and then revise.
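(In script form, the same workflow looks something like the sketch below. I'm using the open-source deep-translator package as one example of a Google Translate wrapper, and the draft text is just a stand-in.)

```python
# Sketch of the draft-in-English, machine-translate, hand-revise workflow.
# deep-translator (pip install deep-translator) is one example wrapper;
# any MT engine would slot in the same way.
from deep_translator import GoogleTranslator

english_draft = (
    "Dear Ms. Alvarez, I am writing to follow up on our conversation "
    "about the lease renewal."
)

spanish_draft = GoogleTranslator(source="en", target="es").translate(english_draft)
print(spanish_draft)

# The output is only a first pass: the last step is always a human
# read-through to fix register, idiom, and anything the engine got wrong.
```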
This is how I have been experimenting with ChatGPT. I obviously correct the things that are factually wrong, but I have also found the generated text to be way too flowery and corny for my style. Still, it has got me thinking that maybe my New England-bred spareness in my writing may not be the most effective approach.
Have you tried asking it to write in a different style?
On what basis would you assume that ChatGPT’s style is better than yours? That seems like a massively risky model to imitate.
Probably just in the same way that, seeing a draft written by someone else, I can see the differences between how they put it and how I did, and in many cases see why their way of putting it might be more effective, even if I wouldn’t have thought of it that way.
Sure, but SD explicitly says they don’t like the machine style and are considering imitating it anyway. That strikes me as deeply misguided.
I was interpreting that as saying that they disliked 90% of it but imitated the other 10%, but that could be a misinterpretation.
The taxi cab analogy would seem to undermine the “don’t worry too much” thesis--a lot of cabbies ended up committing suicide after the arrival of smartphone apps! Taxi medallion debt might be considered a special case, but a lot of knowledge workers enter the market deeply in debt from college loans.
What I wrote was "workers in other industries don’t need to worry about AI taking over their jobs overnight." Certainly I think it's reasonable for workers to worry a little bit and to plan accordingly. And yes, I think people should think twice before accumulating a lot of debt, though I don't think the value of a college degree is going to crash the way a taxi medallion's value did.
Agree that most college graduates shouldn’t worry, though I do wonder about translators. The AI shock may not happen all at once, but the tech also keeps getting better. An industry can appear to be declining slowly and then experience something like a “cliff moment.” Probably in this case it will take care of itself through attrition--hard to imagine many people are entering college today with the express ambition of becoming a translator.
Technology always causes this sort of upheaval - there were a heck of a lot more railroad employees before cars came around. No one is owed a perfectly stable & perfectly secure career, and certainly no one is owed a protected sinecure when technology has made their job unnecessary.
The two situations you've mentioned, where a personal debt load makes the loss of career particularly difficult, are not the fault of technology at all, but rather of some unusual societal frameworks. Technology can hardly be blamed for stressing those already-odd frameworks.
Another thing is... there are options. While student debt can't be purged in bankruptcy, taxi medallion debt certainly can. Bankruptcy does take a toll on your life, but it's not like there's no way out of that debt other than suicide. We don't live in Dickensian Britain.
Even with student debt, what happens if you don't pay? Your credit score tanks and they'll start docking your paychecks. So what? It sucks, but it's not like they come harvest your kidneys to recoup the money. My grandfather got an MBA in his 40s and never paid a cent back he didn't have to. When he died, the poor bastards were still docking his social security checks for the student loan debt, and hadn't recouped even 20% before he kicked it. There's a way to die "up."
The thing about these kinds of technologies is that they help people with fewer skills and hurt people with specialized skills. Unfortunately, knowledge workers tend to have above-average skill levels relative to the workforce as a whole, so the adjustment is going to be quite rough.
Accepting arguendo that the number of cabbie suicides triggered by competition from Uber, etc. qualifies as "lots" (this is one of those things where it is completely unclear what the "normal rate" is, i.e., how many cabbies commit suicide over a comparable length of time when ridesharing smartphone apps weren't available, so we can't actually say whether there's been a statistically significant increase), it appears that the victims were overwhelmingly, if not exclusively, immigrants to the US: https://www.nytimes.com/2018/12/02/nyregion/taxi-drivers-suicide-nyc.html
Given the paucity of native-born Americans committing suicide in this context, it seems reasonable to think that the suicides were driven as much by a lack of understanding of the US bankruptcy system, US tax system, and/or possibly fear that loss of their business might not just have financial consequences, but lead to them being deported if they aren't naturalized US citizens. I'm skeptical the same pressures apply to most knowledge workers, especially when income-based repayment plans for student loans and public service loan deferrals/forgiveness are already an established thing.
I don't want to claim (and don't think I did claim) that nobody is going to face hardship as a result of AI progress. Economic change is always stressful for people and I don't want to minimize that. I just think a lot of people are overestimating its likely severity.
Some issues with these two successive paragraphs:
"“If you put ‘trust’ in ChatGPT it's going to translate it to confianza,” Leon said. “But that's not what it means.” In reality, Leon says, there are 20 or 30 different ways to translate the legal concept of a trust to Spanish. Figuring out which meaning to use in any given sentence requires a sophisticated understanding of American law, Spanish law, and the context of the specific document she is translating.
It’s a similar story to Marc Eybert-Guillon’s work localizing video games. His firm provides translation for a wide variety of text, from character dialog to the labels on in-game items like weapons or magic potions. Often he needs to translate a single word or short phrase — too little context for an unambiguous translation."
These people evidently don't know how to use ChatGPT properly. You can perfectly well ask it something like, "translate [single word] in the context of xyz" and it will give you a more accurate translation than just asking it to translate [single word] without context, as you might expect. Not that this makes it okay to use it for legal documents as I'm sure it isn't perfect - and when you're translating into a language you don't know yourself, you don't know if it's correct. Still...what these people are saying isn't really accurate.
As an example, I recently saw in a store a new Pringles flavor called "Las Meras Meras Habaneras." I wanted to know what this translated to - obviously something about habaneros, but I couldn't find anything for "meras meras" on google, other than "meras" meaning "mere", which doesn't seem quite right in that context. I couldn't find anything for "meras meras" as a Spanish idiom at all, other than a few seemingly-similar uses.
I asked ChatGPT to translate "Las Meras Meras Habaneras" and it told me it couldn't, because the phrase was not a logical statement in Spanish. Then I added, "For context, the term is a new flavor of Pringles."
ChatGPT promptly replied, "In that case, since 'meras' means 'mere' and also has the sense of 'pure' or 'true', the phrase likely means something close to 'The true habanero flavor.'"
ChatGPT can perfectly well take different context instructions and use them to modify its output. Maybe I should make a career of teaching people how to properly prompt ChatGPT.
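In API form, the pattern is just "put the context in the prompt." A minimal sketch (the model name and phrasing are examples, and the whole thing assumes an OpenAI API key in your environment):

```python
# Minimal sketch of context-aware translation prompting. Model name and
# wording are illustrative; the point is that supplying context changes
# which sense of an ambiguous word the model picks.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def translate_with_context(term, context, target_lang="Spanish"):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Translate the word '{term}' into {target_lang}. "
                f"Context: {context}. Reply with the translation only."
            ),
        }],
    )
    return response.choices[0].message.content

# "trust" in its legal sense, not "confianza":
print(translate_with_context(
    "trust",
    "a legal arrangement in which assets are held for a beneficiary",
))
```

No guarantee it picks the legally correct one of Leon's 20 or 30 options, of course - that's where the expert still earns their fee.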
I don’t see how this is relevant here. The context for the legal translation is some substantial part of the whole corpus of Spanish and English law, including professional know-how that isn’t explicitly written down anywhere. I can believe you can get ChatGPT to give you a list of all the ways to translate “trust” into Spanish, but not at all that you can get the legally correct one without writing a prompt just as expert as the human translation.
As I said, a legal context requiring very specific correct terminology is different - but it's not correct to say that ChatGPT can't understand context to some degree. It can, it just won't be 100% tight 100% of the time.
Certainly for something like translating items in a video game, if you tell it to translate "stick" with no context, it could say "adhere" as easily as "a small piece of wood", but if you give it the context that it's something someone is holding and hitting things with, it's probably going to give you an accurate translation of "stick" for that context. Probably.
But all you'd need to translate an English text fluently - if not precisely - into Spanish is ChatGPT plus a Spanish speaker to check that the translated terminology is correct in context. You don't actually need anyone fluent in both languages - just someone fluent in the target language. If there's ambiguity, and the Spanish speaker wants to ask the English writer to clarify something, they can most likely use ChatGPT to translate their communications as well.
Sure, the output isn't guaranteed to be perfect - which means it isn't suited to a legal context - but "mostly perfect" is certainly good enough for a task like translating a video game, where exactly-correct language is rarely crucial. Heck, I sometimes play video games in German (which I don't speak) just for kicks, and it doesn't make it that much harder to understand what's going on.
I don't understand why this process of "gather necessary context, give it to ChatGPT, take the output and share it with a native Spanish speaker who isn't a translator, possibly have a back-and-forth with the Spanish speaker to figure out the best word" would be more efficient than paying a professional translator. It sounds like you're replacing one worker with two for no good reason.
There are a lot more people who speak one language than two, so their labor will be less expensive, for one thing - especially in the context of translations between, say, Finnish and Chinese, where bilinguals are a lot rarer than English-Spanish.
This is assuming that the double-checking is even necessary, which, outside of a legal context, it usually isn't. In the context of a video game with a medieval fantasy setting, for example, you would say to ChatGPT, "Given the context of a video game in a medieval fantasy setting aimed at teens, translate the following list of strings," and you would probably get context-correct output in most cases.
A bigger concern than accuracy, I think, would be tone, but you could control that as well by asking for a translation "in the style of Gabriel Garcia-Marquez" or "in the style of Miguel de Cervantes," as appropriate for the video game or other piece of media in question. (A concrete sketch of both ideas follows below.)
This would probably cover all the needs of someone making a low-budget video game for whom saving on translation while gaining access to other-language markets, at the possible cost of some accuracy, would be an excellent trade. For a bigger-budget video game, where a large developer wants to be careful to avoid any output that could be perceived as insensitive or problematic, you'd probably want the output read over by native-language speakers even if you did pay for a professional translation, since translators themselves aren't perfect. So in either case ChatGPT is at least as useful as a human translator, and cheaper - and not limited to one or a few languages as most human translators are.
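Here's the sketch of the batch-plus-style version (again, the model name, the strings, and the register instruction are all just examples):

```python
# Sketch of batch game-string translation with a shared context preamble
# and a style instruction. Strings and model are examples only.
from openai import OpenAI

client = OpenAI()

strings = ["Rusty Sword", "Potion of Minor Healing", "You cannot rest here."]

prompt = (
    "Given the context of a video game in a medieval fantasy setting aimed "
    "at teens, translate the following UI strings into Spanish, in a "
    "slightly archaic storybook register. Return one translation per line:\n"
    + "\n".join(strings)
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

for original, translated in zip(
    strings, response.choices[0].message.content.splitlines()
):
    print(f"{original} -> {translated}")
```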
So the thesis is that this is merely the next step of automation. Nothing more, nothing less. If so, it’s still pretty bad for the workers, imo, for reasons others laid out. However, I’d like to point out something else. What automation does, I think, is hollow out the middle ground. It takes a service previously available to the middle class, or to the average person in moderate amounts, and converts it to a shittier but good-enough product available in abundance, while the previous, higher-quality product that was “normal” for middle-class people becomes a status symbol of the super rich. There are pros and cons to this process, but we certainly lose something, especially the “we” who are in the middle and upper-middle (but not super-rich) strata of society.
What do you mean by "previously available to the middle class?" I don't see any reason that the introduction of AI would dramatically raise the cost of an old-fashioned translation. It just makes good-enough AI cheap enough that the high cost of a human translation is no longer worth it for most purposes. But people can still pay it if they want to.
Isn’t that what follows from your own prediction? If translation dies as a massive industry, it will be much harder for a middle-class person to find a human translator at a decent price, especially one who knows what they’re about. It’s easy to buy pretty good furniture from Ikea, but I bet it’s far more expensive to have high-quality custom furniture made by a highly skilled carpenter than it was a century ago. Cooking for yourself at home and eating out have both become cheaper thanks to technology, but how many middle-class people can afford their own full-time cook? Having a private secretary, fully qualified to write your correspondence, etc., used to be standard for professionals. Doubtless far more people have access to ChatGPT and phone calendars than ever had secretaries, but a highly skilled human “personal assistant” (or whatever they’re called nowadays), which is still obviously better than all the tech put together, is on trend to become the mark of the upper echelons. Etc.
P.S.
I’m not nostalgic for the past. I think that, thus far at least, the rise in productivity has been a *net* good over the long term. All I’m pointing out is that even if it’s a net good for society, some of us are permanently losing out on some things, and that’s true even from the consumer’s perspective, not just the worker’s.
It's more expensive to hire a carpenter than 100 years ago because wages in general have gone up, and so carpenters' incomes have gone up along with everyone else's. Ditto for cooks, secretaries, etc. But this isn't because living standards have gone down. Quite the contrary. The "middle class" of 100 years ago that could afford maids and cooks were in the top 5 percent if not the top 1 percent of the income distribution. They were able to afford this kind of labor because most people had very low wages and so there was a lot of surplus labor around.
The people we call the middle class today are a completely different slice of the income distribution, from say the 40th to 90th percentile. People in that portion of the income distribution in 1923 would not have had servants. They just had a much worse standard of living due to the lack of washing machines, vacuum cleaners, Ikea, etc.
So yes, rising wages have made life worse in some ways for people in the 95th to 99th percentile of the income distribution because they have to get by with fewer servants. But it's been good for people in the bottom 90 percent of the income distribution who couldn't afford servants in 1923 and can't afford them now. Overall it seems like clear progress to me.
Did you read my P.S. before responding? Also, I’d like some data on the percentages in 1923 if you have it. Thanks!
In 1920 only 22 percent of white people aged 25 to 29 had high school diplomas and only 4 percent had a college degree. So people look back, read that it was common for people with college degrees to have servants, and conclude that living standards have fallen. But in reality people who had college degrees in 1923 were a totally different slice of the income distribution than people with college degrees today. Having a college degree in 1920 made you a member of a tiny elite, and they could easily afford servants because pay for non-college graduates was very low.
Today, many more people have college degrees (35 percent of all young American adults) and median wages are a lot higher. So unsurprisingly most college graduates today can't afford to hire servants. That's because prosperity is far more widely shared than it was a century ago.
You’ll notice that you didn’t answer my question, unless you assume that *only* people with college degrees had servants? Anyway, you keep harking back to an uncontested point (aka a straw man). I ask again: did you read my P.S.? We’re in agreement that society today is better off. That’s hardly the point.
Most, or perhaps all, of your examples are cases where the middle class was priced out by Baumol's cost disease, which is triggered by productivity improvements in _other_ industries.
"I bet it’s far more expensive to have custom made high quality furniture by a highly skilled carpenter than it was a century ago."
Is this actually true in real terms? I think the difference is that 100 years ago you could buy really crappy products or super expensive well made products, but there wasn't very much decent stuff in the middle.
You can still buy the super expensive stuff now, but most people don't want to. As with other parts of fashion, people now often want to switch to new looks or designs after a decade or so instead of buying super expensive furniture and keeping it for life.
But for translation to die out, AI translation has to get a lot better.
It won't be "you have to settle for current-gen AI translation because you can't afford one of the translator specialists that you previously could have afforded"; it'll be much better translation.
Basically, when you say: " It takes a service previously available to the middle class, or to the average person in moderate amounts, and converts it to a shittier but good-enough product available in abundance"
If it was good enough to kill most of the translation industry then I'm not convinced it will be meaningfully worse than what you could afford now, so I think the word "shittier" is wrong there.
It will be “good enough” but not quite as good, and we’ll get used to it, like in so many cases.
I work in video games, and the breakneck speed at which much of the web3 crowd jumped onto the AI bandwagon tells me there is more than a little scamming going on at the moment.
It is changing the industry, but it's far less capable than a tech bro would enthusiastically tell you over coffee at Urth cafe.
The generative voice stuff is hilariously bad at times. I did a demo at GDC where the guy kept telling me, “Oh, don’t talk to that guy, he’s not working right now.” Sounds like a compelling and rich world you built there, partner!
I know it’s improving, and maybe it does take a bunch of jobs, but the disconnect between where it is and what they are currently promising is pretty massive.