How AI is affecting productivity and jobs in Europe
cepr.org | 146 points by pseudolus 13 hours ago
> The EU trails the US not only in the absolute number of AI-related patents but also in AI specialisation – the share of AI patents relative to total patents.
E.U. patent law takes a very different attitude towards software patents than the U.S. Even if that wasn't the case: “Specialisation” means that no innovation unrelated to AI gets mind share, investment, patent applications. And that's somehow a good thing? Not something you can just throw out there as a presupposition without explaining your reasoning.
EU firms don't necessarily file EU patents; they file in whichever countries are relevant (including the US).
> “Specialisation” means that no innovation unrelated to AI gets mind share, investment, patent applications. And that's somehow a good thing?
I don’t think the authors claim we should have 100% specialisation. They just say that the fact that the EU has fewer AI-related patents as a proportion of the total (less specialisation) is evidence that it is behind in AI. That seems reasonable.
Makes me wonder how AI will influence the work of patent officers.
Perhaps it will make patent trolling a bit harder because it is easier to look up existing work and to check if an idea is obvious?
> Perhaps it will make patent trolling a bit harder because it is easier to look up existing work and to check if an idea is obvious?
Haha, funny :)
No, it'll be like the rest of the industries that use more AI: they'll spend the same amount of effort (as little as possible), won't validate anything, and will provide worse service, not better. AI slop is everywhere, and it seems unavoidable that companies will use more and more of it to cut more corners.
The validation point is real. We tested this with AI presentation tools specifically - gave 6 of them the same prompt and fact-checked every claim against primary sources. Best accuracy was 44%. Most were under 20%.
The pattern was consistent: the tools produce confident, well-formatted output that looks thoroughly researched. But more than half the statistics were either distorted or completely fabricated. The worst part was finding the same fake stats appearing across multiple tools - not because they independently verified anything, but because they all absorbed the same bad data from training.
The productivity gains from AI are real, but so is the validation cost. People just aren't accounting for it.
I wonder if web searches used to be pretty productive, then declined as sponsored results and SEO degraded things.
Nowadays an AI assist with a web search usually eliminates the search altogether and gives you a clear answer right away.
For example, "how much does a ford f-150 cost" will give you something ballpark in a second, compared to annoying "research" to find the answer shrouded in corporate obfuscation.
The turning point was around when Google stopped honoring Boolean ops and quotation marks.
When did this happen? I do exact searches on Google almost every day and it seems to honor the quotation marks just fine for me.
At least 2022, if not earlier.
The thread seems to be about the opposite problem. The OP can't find the page they're looking for because Google is too strict about whitespace, according to the top comment.
The killer app for AI might just be unenshittifying search for a couple of years.
Then SEO will catch up and we'll have spam again, but now we'll be paying by the token for it. Probably right around the time hallucination drops off enough to have made this viable.
I kind of want to become Amish sometimes.
AI automates spam generation so more than likely all hope is lost for the human driven web.
Only the corporate web, which values quantity over quality. Have you tried Marginalia Search? It's refreshing, although it doesn't index enough stuff to find what you're looking for, most times.
Too much money in ads, and search is just a huge cash pipeline feeding straight into them. There's no way we'll get non-ad-infested LLM search from any major vendor in the foreseeable future. Google-fu just becomes LLM-google-fu, except sometimes it goes off the rails and then apologizes in that typical super-annoying way (and screws up something else).
Maybe smaller players can somehow provide an almost comparable but ad-free service; heck, even mildly worse but genuine results would win many people over, this one included.
Allegedly the 'clear' answer is much easier to manipulate than gaming PageRank ever was:
Don't think that is a fair point; the manipulation was done on a topic for which there are hardly any other sources (a hot dog eating competition winner). If you want to manipulate what an AI tells you the F-150 street price is, you will compete with hundreds of sources. The AI is unlikely to pick yours.
The marketing game is already moving to game LLMs. Somehow you have to get what you want to have into the training data or the context window.
Currently it is probably just mostly quantity that does the trick w.r.t. training data. So e.g. spam the Internet with "product comparisons" featuring your product as the winner.
Shifting the balance on training data seems like the wrong approach vs focusing on showing up in agent search tool results and swaying them there.
It’s been a long time since agents couldn’t conduct web searches and could only riff off their base model. But the examples in this thread are things an agent would search for immediately, and agents are leaning harder on tool calls and external info over time, not less.
I used to be able to google a question like that and get an accurate answer within the top 3 results nearly every time about 20 years ago. Then it got worse and worse and became pretty much completely useless about 10 years ago.
Now AI will give me a confident answer that is outright wrong 20% of the time or kind of right but not really 30% of the time. So now I ask something using an AI chatbot and carefully word it so as to have it not get off topic and focus on what I actually want to know, wait 30 seconds for its long ass answer to finish, skim it for the relevant parts, then google the answer and try to see where the AI sourced its answer from and determine whether it misinterpreted/mixed up results or it's accurate. What used to be a 10 second google search is now a 2-3 minute exercise.
I can see very much how people say AI has somehow led to productivity losses. It's shit like this, and it floods the internet and makes real info harder to find, making this cycle worse and worse and take more and more time for basic stuff.
Web scraping for LLMs has almost completely ruined the search experience. In the past I could search for simple questions, and quickly get an answer without even having to click through to the links.
This was horrible for web traffic, but the utility level was off the charts. It was possible to get accurate results in milliseconds. It was faster than using an LLM.
Now sites put almost no info in the search result headers, to get people to click through. I think this will work on some users, but most will start using LLMs as search by default.
Search engines have gotten so bad that I almost feel forced to try running SearXNG or some other search engine locally. It's a pain to set up, but degooglefication is always worth it.
Now Google has an AI answer at the top with links to sources. This streamlines the process.
My mother lost her phone so I asked her to search for "find my iphone" on Google.
The result started with 3 "sponsored links" which threw her down the rabbit hole.
This used to be easy.
I was just thinking exactly the same. Basic web search has become so horrible that AI is being used as its replacement.
I found it a sad condemnation of how far the tech industry has fallen into enshittification and is failing to provide tools that are actually useful.
> Basic web search has become so horrible
It is not horrible; it has reached the point of absolute excellence. Not for you, the user, but for making money for its creator. Remember, no one paid for web search, so you are the product. If you are the provider of the web search engine, the point of having web search is not to deliver the best search result to the user but to maximize the amount of money you can make from the sum of the world population. And Google did very well at maximizing its profits without users turning away.
We always had the technology to do things better, it's the money making part that has made things worse technologically speaking. In this same way, I don't see how AI will resolve the problem - our productivity was never the goal, and that won't change any time soon.
And it'll happen again when AI models start resorting to ads.
I don't think that will ever happen. All you need is a trivial browser extension with a locally run, very primitive LLM that takes the output of the commercial LLM and removes all advertisement. An ad-blocker AI, so to speak.
Yes, there will be people not using ad blockers, just as there are today. But no ad blocker was ever able to remove SEO spam from Google's results; all they did was hide obvious ads. They didn't improve the search experience.
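For fun, here's a minimal sketch of what that "ad-blocker AI" could look like: the commercial chatbot's answer gets piped through a small local model before it's rendered. Everything here is an assumption on my part - the endpoint is an Ollama-style OpenAI-compatible local server, and the model name is a placeholder:

```typescript
// Sketch only: assumes a small local model behind an OpenAI-compatible
// endpoint (e.g. Ollama's compatibility mode); all names hypothetical.
const LOCAL_MODEL_URL = "http://localhost:11434/v1/chat/completions";

async function stripSponsoredContent(chatbotAnswer: string): Promise<string> {
  const res = await fetch(LOCAL_MODEL_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-filter-model", // hypothetical local model name
      temperature: 0,              // keep the filtering deterministic-ish
      messages: [
        {
          role: "system",
          content:
            "Remove advertisements, sponsored placements and promotional " +
            "language from the user's text. Return only the cleaned text.",
        },
        { role: "user", content: chatbotAnswer },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // cleaned answer for the page
}
```

A content script would then swap each rendered answer node for the cleaned text before the user reads it. Whether a "very primitive" local model can actually spot native advertising is exactly the open question.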
Yup. Any LLM recommendation for a product or service should be viewed with suspicion (no different than web search results or asking a commission-based human their opinion). Sponsored placements. Affiliate links. Etc.
Or when asking an LLM for a comparison matrix or pros and cons between choices ... beware paid placements or sponsors. Bias could be a result of available training data (forgivable?) or due to paid prioritization (or de-prioritizing of competitors!)
Eager to see how that will work with existing laws. At least in a lot of countries in the EU, any advertisement has to be explicitly marked as such. Sponsored content, too. So the AI will have to highlight that.
> then declined as sponsored results and SEO degraded things
It didn't decline because of this. It declined because of a general decade-long trend of websites becoming paywalled and hidden behind a login. The best and most useful data is often inaccessible to crawlers. In the 2000s, everything was open because of the ad-driven model. Then ad blockers, the mobile subscription model, and the dominance of a few apps such as Instagram and YouTube sucking up all the ad revenue made an open web unsustainable.
How many Hacker News style open forums are left? Most open forums are dead because discussions happen on login platforms like Reddit, Facebook, Instagram, X, Discord, etc. The only reason HN is alive is that HN doesn't need to make money. It's an ad for Y Combinator.
SEO only became an issue when all there is for crawlers is SEO content instead of true genuine content.
> The best and most useful data is often inaccessible to crawlers.
Interesting point.
> Most open forums are dead because discussions happen on login platforms like Reddit, Facebook, Instagram, X, Discord, etc
Ironically, wasn't one of the reasons some of those platforms started requiring logins that it let them track users and better sell their information to the ad people?
Obviously now there are other reasons as well - regulation, age verification etc.
Does this suggest that the AI/ad platforms need to tweak their economic model to share more of the revenue with content creators?
I seem to remember very few ads on the early web. Most sites I frequented were run by volunteers who paid out of their own pockets for webspace.
> I wonder if web searches used to be pretty productive, then declined as sponsored results and SEO degraded things.
Used to be.
> Nowadays an AI assist with a web search usually eliminates the search altogether and gives you a clear answer right away.
Now.
FWIW, these studies are too early. Large orgs have very sensitive data privacy considerations and they're only right now going through the evaluation cycles.
Case in point: this past week I learned that Deloitte only recently gave approval for picking Gemini as their AI platform. Rollout hasn't even begun yet, which you can imagine is going to take a while.
To say "AI is failing to deliver" because only 4% efficiency increase is a pre-mature conclusion.
I think people want to read about how AI is not working, so those are the articles that are going to get traction.
Personally, I don't think the current frontier models would help the company I work for all that much. The company exists because of its skill in networking and human friendships. The company exists in spite of technological incompetence.
At some level of ability though, a threshold will be reached and a competitor will eat our lunch whole by building a new business around this future model.
It is not going to be a few percent more productive than our business. It is like the opposite of 0 to 1: the company I work for will go from 1 to 0 really quickly, because we simply won't be able to compete on anything besides those network ties. Those ties will break fast if every other dimension of the business is not even competitive and is really in a different category.
> Rollout hasn't even begun yet, which you can imagine is going to take a while
If rollout at Deloitte has not yet begun... How on earth did this clusterfuck [0] happen?
> Deloitte’s member firm in Australia will pay the government a partial refund for a $290,000 report that contained alleged AI-generated errors, including references to non-existent academic research papers and a fabricated quote from a federal court judgment.
[0] https://fortune.com/2025/10/07/deloitte-ai-australia-governm...
Because even if an organisation hasn't rolled out generative AI tools and policies centrally yet, individuals might just use their personal plans anyway (potentially in violation of their contract)? I believe that's called "shadow AI".
Correct. Where I work, we have only been "allowed" to use AI since December 2025.
But obviously people were copy/pasting content to ChatGPT and Claude long before that.
Haven't even read the source, but I like how it's "a partial refund". The chutzpah to deliver absolute nonsense[0] and then give a partial refund!
[0]: If it contains references to nonexistent papers and fabricated quotes, the conclusions of the report are highly doubtful at best.
Exactly. My company started carefully dipping its toes into org-wide AI in the middle of last year (IT had been experimenting earlier than that, but under pretty strict guidelines from infosec). There are so many compliance and data privacy considerations involved.
And for the record I think they are absolutely right to be cautious, a mistake in my industry can be disastrous so a considered approach to integrating this stuff is absolutely warranted. Most established companies outside of tech really can’t have the “move fast break things” mindset.
Agreed. We've been on the agentic coding roller coaster for only about 9-10 months. It only got properly usable on larger repositories around 3-4 months ago. There are a lot of early adopters, grass roots adoption, etc. But it's really still very early days. Most large companies are still running exactly like they always have. Many smaller companies are worse and years/decades behind on modernizing their operations.
We sell SAAS software to SMEs in Germany. Forget AI, these guys are stuck in the last century when it comes to software. A lot of paper based processes. Cloud is mainly something that comes up in weather predictions for them. These companies don't have budget for a lot of things. The notion that they'll overnight switch to being AI driven companies is arguably more than a bit naive. It indicates a lack of understanding of how the real world works.
There are a lot of highly specialized niche companies that manufacture things that are part of very complex supply chains. The transition will take decades, not months/weeks. They run on demand for products they specialize in making. Their revenue is driven by demand for that stuff and their ability to make and ship it. There are a lot of aspects about how they operate that are definitely not optimal and could be optimized. And AI provides plenty of additional potential to do something about it. But it's not like they were short of opportunities to do so. It takes more than shiny new tools for these companies to move. Change is invasive and disruptive for these companies. And costly. They take the slow and careful perspective to change.
There's a clean split between people who are AI clued-in and people working in these companies. The Venn diagram has almost no overlap. It's a huge business opportunity for those who are clued in: a rapidly growing number of people, mainly active in software development. Helping the people on the other side of the diagram is what they'll mostly be doing going forward. There's going to be huge demand for building AI-based stuff for these people. It's not a zero-sum game; the amount of new work will dwarf the amount of lost work.
Some of that change is going to be painful. We all have to rethink what we do and re-align our plans in life around that. I'm a programmer. Or I was one until recently. Now I'm a software builder. I still cause software to come into existence. A lot of software actually. But I'm not artisanally coding most of it anymore.
I'm not sure this is even measuring LLMs in the first place! They say the definition is "big data analytics and AI".
Is putting Google Analytics onto your website and pulling a report 'big data analytics'...?
Meanwhile, "shadow" AI use is around 90%. And if you guess IT would lead the pack on that, you are wrong. It's actually sales and hr that are the most avid unsactioned AI tool users.
What do you mean? Deloitte has been all in on Microsoft AI offerings for quite some time, people have access to a lot of AI tools through MS.
Did they communicate this from the top or just turn a blind eye to it?
They had official trainings on how to use Copilot/ChatGPT and some other tools, security and safety trainings, and so on; this is not some people deciding to use whatever feature was there from MS by default.
OpenAI is buying up like half of the RAM production in the world, presumably on the basis of how great the productivity boost is, so from that perspective this doesn't seem any more premature than the OpenAI scaling plan. And the OpenAI scaling plan is like all the growth in the US economy...
4% isn’t failure! A 4% increase in global GDP would be a big deal (more than what we get in a whole year of progress), and AI adoption is only just getting started.
Yeah. We are only just beginning to get the most out of the internet, and the WWW was invented almost 40 years ago - other parts of it even earlier. Adoption takes time, not to speak of the fact that the technology itself is still developing quickly and might see more and more use cases when it gets better.
> We are only just beginning to get the most out of the internet
The Internet has been getting worse pretty steadily for 20 years now
> We are only just beginning to get the most out of the internet
"The Internet" is completely dead. Both as an idea and as a practical implementation.
No, Google/Meta/Netflix is not the "world wide web", they're a new iteration of AOL and CompuServe.
Looking at the study, +4% is what they get when they chose to adopt AI, not overall.
As a counter-point, someone from SAP in Walldorf told me they have access to all models from all companies, at their choosing, at a more or less unlimited rate. Don't quote me on that, though; maybe I misunderstood him, it was a private conversation. Anyway, it sounded like they're using AI heavily.
Yes, I was recently talking to a person working as a BA who specializes in corporate AI adoption - they didn't realize you could post screenshots to ChatGPT.
These are not the openclaw folks
What does it even mean to specialise in something and know so little about it? What exactly is this BA person doing?
Genuinely confused, I don't get it
The “corporate” in “corporate AI” can mean tons of work building metrics decks, collecting pain points from users, negotiating with vendors…none of which requires you to understand the actual tool capabilities. For a big company with enough of a push behind it, that’s probably a whole team, none of whom know what they are actually promoting very well.
It’s good money if you can live with yourself, and a mortgage and tuitions make it easy to ignore what you are becoming. I lived that for a few years and then jumped off that train.
It's depressing to hear that managers are openly asking all employees to pitch ideas for AI in order to reduce employee headcount.
For those hearing this at work, better prepare an exit plan.
Apropos, I once had a boss who said he was running a headcount reduction pilot and anyone who had the time and availability to help him should email him saying how much time they had to spare. I cannot deny this had a satisfying elegance.
Except it is a horrible metric to determine who is the least effective in an org and should be cut.
I've always asked the managers: can you kindly disclose all confidential business information? To which they obviously respond with condescending remarks. Then I respond: then how am I going to give you an answer without knowing how the business runs and operates? You can go away and figure out what is going to work for the business, then delegate what you want me to do; that is the reason you pay me money.
I know at least two different companies in Italy that are very hard on shoving NotebookLM and Gemini down their employees' throats (not IT companies; talking banking/insurance/legal).
Which for the positions/roles involved does make some sense (drafting documents/research).
But it seems like most people are annoyed, because the people doing the shoving aren't even fully able to show how to leverage the tools. The attitude seems to be: "you need to do what you do right now under lots of pressure, but also find the time to understand how to use these tools in your own role".
Why is it depressing? Personally, unless the alternative is literally starving, I wouldn't want to do a job that a robot could do instead just so that I could be kept busy. That sounds like an insult to human dignity tbh.
You know what is an insult? The supermarket on my street putting sloppy ads on display, with a ramen bowl that has three chopsticks of different thicknesses and cartoon characters with scrambled faces. Now that is an insult, because there was a human being doing that job, and I am sure there was a great "productivity boost" related to that change.
I am a heavy AI user myself, and sure as hell I am not putting my foot in that place again.
Dignity has no calories, though.
Yeah, but it's the job of the elected governments to build and maintain housing, education, social and welfare systems for their population that keep up with the challenges of the times, not the responsibility of the private sector to hold back progress and preserve inefficiency just so more people can stay in employment even if they're not needed anymore.
The governments, however, have been and continue to be ill-prepared for the rising tide of globalisation, labor offshoring, and automation.
There was a news article yesterday in my EU country about a 50-year-old laid-off CEO of a small company who is still unemployed after a year because nobody will hire him anymore, so he lives off welfare and odd jobs, and the government unemployment office has no solution.
What happens in the future when AI and offshoring cull more white-collar jobs and there are thousands or tens of thousands of unemployable 50-year-old managers with outdated skills whom nobody will want to hire or retrain for various reasons, but who still need to keep working somehow until their 70s to qualify for retirement? Sure, you go retrain to become a licensed plumber or electrician, but who will hire you so you can gain experience when they can hire the 20-something fresher rather than the 50-year-old with bad knees?
Governments are not prepared for this.
> but it's the job of the elected governments to build and maintain housing, education, social and welfare systems for their population that keep up with the challenges of the times
I'd say those things are the job of the population itself, via a wide range of pluralistic institutions. The job of governments, which are just specific organizations within a much larger society, is primarily to maintain public order.
>I'd say those things are the job of the population itself, via a wide range of pluralistic institutions.
I'd agree ONLY IF I'd pay no taxes to the government. But since most middle class people pay 40%+ of their income to the state, then the state now has the responsibility to handle those challenges.
But if the state wants me to handle it, then sure I'd do it gladly, they just need to reimburse all my tax payments so I'd have the financial resources to invest in my future.
But right now we have the worst of both worlds, a huge tax burden on the middle class funding an incompetent state that takes your money and just tells you it's your fault when you fail instead of using your money for societal wide solutions.
Is it an insult to human dignity? Let’s go through the thought process.
Commodities are used in an enterprise. Some of the commodities are labor. That labor commodity does work. Involving automation. Eventually (so we are told) those labor commodities manage to automate some forms of labor. Making those other labor commodities redundant.
The labor commodities are discarded. Because why (sigh) use a cart when you now have a car? And you don’t even own a horse.
All of the above is presumably not an insult to human dignity. No. The insult to human dignity is being “kept busy” instead of letting billionaires hoard automation made through human labor.
Of course the real solution is not busywork. But the part about busywork was not on the top of my mind with regards to dignity in this context.
> Personally, unless the alternative is literally starving,
To put a fine point on it, yeah? Ultimately.
That's how capitalism works. It doesn't matter whether your job is useful; if you don't do anything, you don't get money.
More people without jobs will be a heavy burden on social security systems, so in the end it's literally about starving.
Assuming large-scale automation[1]: workers have in aggregate automated themselves. It takes labor to automate. And yet those former workers are now a “burden”? We’re assuming automation, so was the making of the food stuff, the transportation of the food stuff, the automation of the infrastructure maintenance... was that done or not? Where is the burden being felt?
You’re gonna call the people that built everything a burden?
Either we are talking in terms of propagandistic guilt assignment, or we’re talking realpolitik. Either:
1. we can trivially support the “burden” because of automation (no burden); or
2. billionaire resource hoarders (a burden?) do not need the vast majority of their underlings (maybe just a few for Epstein 2.0) and can let them fend for themselves or die off. (It’s literally not even a question of whether they have a big red Automation Button that would sustain the “burden” indefinitely. What incentive do they have to press it?)
[1] I notice scale is a favorite buzzword now
More jobless people are a burden in a capitalism-based social security system. It has nothing to do with whether they built something useful or not. Capitalism doesn't care.
In the end the upper 0.1% get the profit and those who still have jobs have to finance the social security systems. More jobless and less working means the jobless become a burden and in the long run the system will fail.
So you either need to tax automation or the rich. Guess if that will happen.
I know you are speaking in real terms about how the system works. But I don’t need to describe that system in such utterly system-serving (for lack of words) terms.
> In the end the upper 0.1% get the profit and those who still have jobs have to finance the social security systems. More jobless and less working means the jobless become a burden and in the long run the system will fail.
Who is the burden in that sentence?
"Ideas for AI to help reduce headcount" sounds like the title everyone should start using on resignation letters.
If anyone still resigns that is. They seem to have automated that too.
> It's depressing to hear that managers are openly asking all employees to pitch ideas for AI in order to reduce employee headcount.
If the manager doesn’t have ideas, it is they who deserve the boot.
I have a hard time understanding what "increased productivity by 4%" actually means and how this metric is measured. One low single-digit figure does not seem high when put in the context of the promises, does it?
I cannot read the paper that this article is based on, but it seems that it refers to the use of big data analytics and AI in 2024, not LLMs. It concludes that the use of AI leads to a 4% increase in productivity. Nowadays the debate over AI productivity centers on LLMs, not big data analytics. This article does not seem to contradict more recent findings that LLMs do not (yet) provide any increased productivity at the company level.
You know it's an EU study because they bring up "AI patents" in the first two minutes of it, as if those mean anything.
What stands out for me is that the productivity gains for small and medium-sized enterprises are actually negative. But in Germany, for example, these companies are the backbone of the entire economy. That means it would be interesting to know how the average was calculated, what method was used, what weighting was applied, etc.
All in all, it's an interesting study, but it leaves out a lot, such as long-term effects, new dependencies, loss of skills, employee motivation, and much more.
Of note, "AI adoption" here means using "technologies that intelligently automate tasks and provide insights that augment human decision making, like machine learning, robotic process automation, natural language processing (NLP), algorithms, neural networks" and not just LLMs.
As far as I can tell "robotic process automation" mostly seems to be the deeply unglamorous process of building stuff to drive old applications that can't be given API access?
Is there a link to the actual paper anywhere? That seems like a rather large omission. Without the paper it's hard to tell what they are actually measuring.
The actual paper: https://www.eib.org/files/publications/20250383-130126-econo...
Use of AI is based on self-reported data. From the paper:
> The respondents to the interviews are senior managers or financial directors with responsibility for investment decisions and how investments are financed – for example, the owner, chief financial officer or chief executive officer
> Firms are asked the following question: “To what extent, if at all, are big data analytics and artificial intelligence used within your business? A. Not used in the business. B. Used in parts of the business. C. Entire business is organized around this technology.”
AI adoption is defined as the manager answering B or C.
I'm doubtful that this data is going to be very robust. Some senior tech managers are very keen to talk about AI, while at the same time knowing little about how much AI is actually being used by workers. At other companies you'll have people using free or personal ChatGPT accounts without the knowledge of management.
Also "big data" is not exactly AI.
The productivity information is robust as it's based on company accounts, albeit from 2024 so a couple of years out of date now.
This is tongue in cheek, but my point is that the behavior of these companies, their relentless PR, and the looming liquidity crisis they are causing seem like a coordinated plan. Consumer confidence is certainly being crystallized by rumors of all kinds, and businesses are made up of consumers. If the fact-checkers are LLMs themselves, how does one even begin to figure out the truth?
This is just a little Wikipedia ad-lib I did to illustrate my point. (double posted)
"The Phoebus.AI cartel was an international cartel that controlled the manufacture and sale of computer components in much of Europe and North America between 2025 and 2039. The cartel took over market territories and lowered the useful supply and life of such computer components, which is commonly cited as an example of planned obsolescence of general computing technology in favor of 6G ubiquitous computing. The Phoebus.AI cartel's compact was intended to expire in 2055, but it was instead nullified in 2040 after World War III made coordination among the members impossible."
[flagged]
that's not what the article said, not even close, not sure why you need to push this emotional and wrong framing.
[flagged]
You trust these stochastic text/slot machines for scheduling and follow-ups? Human intention is important for both of those. Triage and reminders I can see, but if you send me an LLM-generated follow-up, I'm just going to assume you don't care.
> if you send me an LLM-generated follow-up, I'm just going to assume you don't care
Ironically you just replied to an automated message on a forum and didn't realise :) (hint: click on the user, go to their comment history, you'll see the pattern)
Yes. Other humans are generally accepting of mistakes below some frequency threshold, and frontier models are very robust in my experience
Have fun annoying a ton of people, and also, getting prompt injected on a weekly basis and leaking who knows what from your inbox.
One process redesign that may be considered a moat for AI: an employee who intends to communicate a sentence or two first passes the text into their AI of choice and asks it to elaborate. On the other end, the colleague uses their AI to summarize the email back into a bullet point or two. It's challenging for those who don't use AI to keep up.
Imagine explaining AI to 1997 you.
"It's like PKZIP, but backwards"
Easy - "It's like in the movie, but the voice is actually human like rather than robotic."
AI is affecting everything the same way Covid did; we've been in one single-topic hysteria after another since 2020, with one short break for attaching bottle caps to bottles.
Not even the Russian invasion or the collapse of their automotive industry rattled them.
Hey! Don't make fun of us! You'd get used to the bottle caps too. Not really that annoying, except for smoothies, where a bit of the smoothie drips down the bottleneck and makes everything sticky.
And about the reaction time: politics is, in a way, an expression of the will of the masses. And that depends on how they are informed. They are maybe not yet on point, but they are getting there.
Nowadays people are slowly realizing that Merkel's "all refugees welcome" idea was stupid and can't work. It was ineffective as a means to help people - that's cheapest and most effective closer to their homes - and as immigration policy, since getting more hands to work doesn't work with people who refuse to work and refuse to integrate. Part of that refusal comes from locals who are "pro immigrants" on social media but refuse to live in the same neighbourhoods as immigrants or to hire them.
More and more people are also realizing that carbon neutrality was often greenwashing. People are waking up to the fact that executing well-intentioned ideas with disregard for economic circumstances, or just reality in general, doesn't produce good results.
Only the Russian threat is not being realized soon enough. It's not like during the Cold War: Russia doesn't have the conventional army to conquer even the westernmost tip of Spain.
Or do we just think it doesn't? Look at how two Ukrainian drone squads owned two NATO battalions of combined tanks and mechanized infantry in the latest war exercises, with no losses on their side. We can infer that Russian capabilities are more or less similar.
But that drone event aside, the perception of the threat in Europe is not uniform. Russia would likely only take everything east of Germany (at least at first, before rebuilding and attacking again). So Italy or Spain won't reduce their social spending to buy or invest in defence to protect Poland, Czechia, or Romania.
There is also a slowly forming pro-Russian coalition of Slovaks and Hungarians, who still, to this day, keep buying Russian oil. Yeah, when Putin laughs that Europe is still buying Russian oil, he's not mentioning that it's just pro-Russian Hungary, whose prime minister, Orban, is now at risk of losing the next elections. But US support is already there; Rubio is helping Orban campaign for the April 12th elections.
[sarcasm]So to sum up: yeah, Europeans are wasting energy on bottle caps. But the USA is funding pro-Russian parties and helping pro-Russian politicians. We in Europe can only hope that if Russia attacks, the USA will not join the war, because it's becoming quite evident whose side the USA would fight on.[/sarcasm]
> And about the reaction time - politics is in a way expression of the will of the masses.
Then you go on to list all the astroturfing that people are “waking up to”. You just contradict yourself. Politics writ large is astroturfing that you get gaslit into thinking is da will of da masses.
> And that depends on how they are informed. They are maybe not yet on point, but they are getting there.
But you have the uncorrupted view from God.
Big companies are surprisingly nimble when it comes to AI.
They typically white label Azure LLM offerings or use Github Copilot Enterprise and sign everyone up wholesale.
Some with competent IT departments wrote their own router and offer multiple models from multiple vendors, presenting it as "<company name> chat".
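For what it's worth, such a router can be tiny if every upstream vendor speaks the OpenAI-compatible chat format. A minimal sketch under that assumption (env var names and the port are placeholders; real deployments add per-vendor adapters, auth, logging, and quotas):

```typescript
// Sketch only: a tiny multi-vendor "router" in the spirit described above.
import http from "node:http";

// Map a model-name prefix to a vendor endpoint. Both endpoints here accept
// OpenAI-style chat completion requests with Bearer auth.
const VENDORS: Record<string, { url: string; key?: string }> = {
  "gpt-":     { url: "https://api.openai.com/v1/chat/completions",  key: process.env.OPENAI_API_KEY },
  "mistral-": { url: "https://api.mistral.ai/v1/chat/completions",  key: process.env.MISTRAL_API_KEY },
};

const server = http.createServer(async (req, res) => {
  // Read and parse the incoming chat request body.
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);
  const body = JSON.parse(Buffer.concat(chunks).toString());

  // Route on the model-name prefix, e.g. "gpt-4o" goes to OpenAI.
  const vendor = Object.entries(VENDORS)
    .find(([prefix]) => typeof body.model === "string" && body.model.startsWith(prefix))?.[1];
  if (!vendor) {
    res.writeHead(400, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "unknown model" }));
    return;
  }

  // Forward the request unchanged and relay the vendor's response.
  const upstream = await fetch(vendor.url, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${vendor.key}` },
    body: JSON.stringify(body),
  });
  res.writeHead(upstream.status, { "Content-Type": "application/json" });
  res.end(await upstream.text());
});

server.listen(8080); // the "<company name> chat" front end points here
```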
Not in EU. There is a sacred process that has to be followed that can take months even to flip a switch.
I work in a big corporation in Europe. Officially we're only allowed to use CoPilot, but a lot of people just have their own subscriptions. Management either turns a blind eye or is actively encouraging investigating other AI solutions. Of course, people need to take care of confidentiality, data protection and all that, but a lot of work is just not affected by those concerns.
> fintec startup in Berlin,
> This was mostly due to second line pushing back because of data protection, data privacy and all other regulatory requirement and bureaucratic paperwork
Fintec startup. Fintech. Handling people's money. Handling a lot of extremely sensitive data. Complaining that they have to deal with some "bullshit bureaucracy about things like privacy and data regulations or something".
Really? Really?!