It’s been a very hard year
bell.bz
428 points by surprisetalk 2 days ago
I’ve seen Piccalilli’s stuff around and it looks extremely solid. But you can’t beat the market. You either have what they want to buy, or you don’t.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
The market is speaking. Long-term you’ll find out who’s wrong, but the market can usually stay irrational for much longer than you can stay in business.
I think everyone in the programming education business is feeling the struggle right now. In my opinion this business died 2 years ago – https://swizec.com/blog/the-programming-tutorial-seo-industr...
I get the moral argument and even agree with it, but we are a minority, and of course we expect to be able to sell our professional skills -- but if you are 'right' and out of business, nobody will know. Is that any better than 'wrong' and still in business?
You might as well work on product marketing for AI, because that is where the client dollars are allocated.
If it's hype, at least you stayed afloat. If it's not, maybe you find a new angle if you can survive long enough? Just survive and wait for things to shake out.
Yes, actually - being right and out of business is much better than being wrong and in business when it comes to ethics and morals. I am sure you could find a lot of moral values you would simply refuse to compromise on for the sake of business. The line between moral value and heavy preference, however, is blurry - and is probably where most people have AI placed on the moral spectrum right now. Being out of business shouldn't be a death sentence, and if it is, then maybe we are overlooking something more significant.
I am in a different camp altogether on AI, though, and would happily continue to do business with it. I genuinely do not see the difference between it and the computer in general. I could even argue it's the same as the printing press.
What exactly is the moral dilemma with AI? We are all reading this message on devices built off of far more ethically questionable operations. That's not to say two things can't both be bad, but it just looks to me like people are using the moral argument as a means to avoid learning something new while being able to virtue signal how ethical they are about it, while at the same time refusing to sacrifice, for ethical reasons, the things they are already accustomed to once they learn more about them. It just all seems rather convenient.
The main issue I see talked about is unethical model training, but let me know of others. Personally, I think you can separate the process from the product. A product isn't unethical just because unethical processes were used to create it. The creator/perpetrator of the unethical process should be held accountable and all benefits taken back so as to kill any perceived incentive to repeat the actions, but once the damage is done, why let it happen in vain? For example, should we let people die rather than use medical knowledge gained unethically?
Maybe we should be targeting these AI companies if they are unethical: stop them from training any new models using the same unethical practices, hold them accountable for their actions, and distribute the intellectual property and profits gained from existing models to the public. But models that are already trained can actually be used for good, and I personally see it as unethical not to.
Sorry for the ramble, but it is a very interesting topic that should probably have as much discussion around it as we can get
> What exactly is the moral dilemma with AI? We are all reading this message on devices built off of far more ethically questionable operations.
The main difference is that for those devices, the people negatively affected by operations are far away in another country, and we're already conditioned to accept their exploitation as "that's just how the world works" or "they're better off that way". With AI, the people affected - those whose work was used to train, and those who lose jobs because of it - are much closer. For software engineers in particular, these are often colleagues and friends.
>> The creator/perpetrator of the unethical process should be held accountable and all benefits taken back as to kill any perceived incentive to perform the actions, but once the damage is done why let it happen in vain?
That's very similar to other unethical processes (for example, child labour), and we see that governments are often either too slow to move or just not interested, and that's why people try to influence the market by changing what they buy.
It's similar for AI, some people don't use it so that they don't pay the creators (in money or in personal data) to train the next model, and at the same time signal to the companies that they wouldn't be future customers of the next model.
(I'm not necessarily in the group of people avoiding AI, but I can see their point)
> Yes, actually - being right and out of business is much better than being wrong and in business when it comes to ethics and morals.
Yes, but since you are out of business you no longer have an opportunity to fix that situation or adapt it to your morals. It's final.
Turning the page is a valid choice though. Sometimes a clean slate is what you need.
> Being out of business shouldn't be a death sentence, and if it is then maybe we are overlooking something more significant.
Fair point! It feels like a death sentence when you put so much into it though -- a part of you IS dying. It's a natural reflex to revolt at the thought.
> For example, should we let people die rather than use medical knowledge gained unethically?
Depends if you are doing it 'for their own good' or not.
Also the ends do not justify the means in the world of morals we are discussing -- that is pragmatism / utilitarianism and belongs to the world of the material not the ideal.
Finally - who determines what is ethical, beyond the 'golden rule'? This is the most important factor. I'm not implying ethics are ALL relative, but beyond the basics they are, and who determines that is more important than the context or the particulars.
>Yes, but since you are out of business you no longer have an opportunity to fix that situation or adapt it to your morals. It's final.
Lots of room for nuance here, but generally I'd say it's more pragmatic to pivot your business to one that aligns with your morals and is still feasible, rather than convince yourself you're going to influence something you have no control over while compromising on your values. I am going to emphasize the relevance of something being an actual moral or ethical dilemma vs something being a very deep personal preference or matter of identity/personal branding.
>Fair point! It feels like a death sentence when you put so much into it though -- a part of you IS dying. It's a natural reflex to revolt at the thought.
I agree, it is a real loss and I don't mean for it to be treated lightly but if we are talking about morals and potentially feeling forced to compromise them in order to survive, we should acknowledge it's not really a survival situation.
>Depends if you are doing it 'for their own good' or not.
What do you mean by this?
I am not posing a hypothetical. Modern medicine has plenty of contributions from unethical sources. Should that information be stripped from medical textbooks, and doctors threatened with losing their licenses if they use it to inform their decisions, until we find an ethical way to relearn it? Knowing this would likely allow large amounts of suffering to go untreated that could otherwise have been treated? I am sincerely trying not to make this sound like a loaded question.
also, this is not saying the means are justified. I want to reiterate my point of explicitly not justifying the means and saying the actors involved in the means should be held maximally accountable.
I would think from your stance on the first bullet point you would agree here - as by removing the product from the process you are able to adapt it to your morals.
>Finally - Who determines what is ethical?
I agree that philosophically speaking all ethics are relative, and I was intending to make my point from the perspective of navigating the issues as an individual, not as a collective making rules to enforce on others. So: you. You determine what is ethical to you.
However, there are a lot of systems already in place for determining what is deemed ethical behavior in areas where most everyone agrees some level of ethics is required. This is usually done through consensus and committees of experts in ethics and experts in the relevant field it's being applied to.
AI is new and this oversight does not exist yet, and it is imperative that we all participate in the conversation because we are all setting the tone for how this stuff will be handled. Every org may do it differently, and then whatever happens to be common practice will be written down as the guidelines.
>It's final.
You should tell that to all the failed businesses Jobs had, or the ones he was ousted from. Hell, Trump hasn't really had a single successful business in his life.
Nothing is final until you draw your last breath.
>Who determines what is ethical? beyond the 'golden rule'?
To be frank, you're probably not the audience being appealed to in this post if you have to suggest "ethics can be relative". This is clearly a group of craftsmen offering their hands and knowledge. There are entire organizations who have guidelines if you need some legalese sense of what "ethical" is here.
> but once the damage is done why let it happen in vain?
Because there are no great ways to leverage the damage without perpetuating it. Who do you think pays for the hosting of these models? And what do you mean by distribute the IP and profits to the public? If this process will be facilitated by government, I don’t have faith they’ll be able to allocate capital well enough to keep the current operation sustainable.
>but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?
Depends. Is it better to be "wrong" and burn all your goodwill for any future endeavors? Maybe, but I don't think the answer is clear cut for everyone.
I also don't fully agree with us being the "minority". The issue is that the majority of investors are simply not investing anymore. Those remaining are playing high stakes roulette until the casino burns down.
> but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?
yes [0]
Can you... elaborate?
Not the parent.
I believe that they are bringing up a moral argument, which I'm sympathetic to, having quit a job before because I found that my personal morals didn't align with the company's, and the cognitive dissonance of continuing to work there was weighing heavily on me. The money wasn't worth the mental fight every day.
So, yes, in some cases it is better to be "right" and be forced out of business than "wrong" and remain in business. But you have to look beyond just revenue numbers. And different people will have different ideas of "right" and "wrong", obviously.
Moral arguments are a luxury of thinkers, and only a small percentage of people can be reasoned with that way anyway. In most cases you can manipulate people on morals, but not reason with them.
Agreed that you cannot be in a toxic situation and not have it affect you -- so if THAT is the case -- by all means exit asap.
If it's a perceived ethical conflict, the only one you need to worry about is the golden rule -- and I do not mean 'he who has the gold makes the rules', I mean the real one. If that conflicts with what you are doing, then also probably make an exit -- but many do not care, trust me... They would take everything from you and feel justified as long as they are told (just told) it's the right thing. They never ask themselves. They do not really think for themselves. This is most people. Sadly.
But the parent didn't really argue anything, they just linked to a Wikipedia article about Raytheon. Is that supposed to intrinsically represent "immorality"?
Have they done more harm than, say, Meta?
>they just linked to a Wikipedia article about Raytheon
Yeah, that's why I took a guess at what they were trying to say.
>Is that supposed to intrinsically represent "immorality"?
What? The fact that they linked to Wikipedia, or specifically Raytheon?
Wikipedia does not intrinsically represent immorality, no. But missile manufacturing is a pretty typical example, if not the typical example, of a job that conflicts with morals.
>Have they done more harm than, say, Meta?
Who? Raytheon? The point I'm making has nothing to do with who sucks more between Meta and Raytheon.
Well, sure, I'm not disagreeing with the original point about moral choice, and in fact I agree with it (though I also think that's a luxury, as someone else pointed out).
But if someone wants to make some blanket judgement, I am asking for a little more effort. For example, I wonder if they would think the same as a Ukrainian under the protection of Patriot missiles? (also produced by Raytheon)
Here are Raytheon part markings on the tail kit of a GBU-12 Paveway glide bomb that Raytheon sold to a corrupt third world dictator, who used that weapon to murder the attendees of an innocent wedding in a country he was feuding with.
https://www.bellingcat.com/news/middle-east/2018/04/27/ameri...
I know the part number of every airplane part I have ever designed by heart, and I would be horrified to see those part numbers in the news as evidence of a mass murder.
So, what is your moral justification for defending one of the world’s largest and most despised weapons manufacturers? Are you paid to do it or is it just pro-bono work?
Excuse me, do you make personal attacks on anyone who dares ask for an actual reasoned argument?
Most if not all aerospace companies also produce military aircraft, right? Or is your reasoning that if your particular plane doesn't actually fire the bullets, then there's no moral dilemma?
Defending? I am simply pointing out the obvious flaws in your logic.
If you think Raytheon is the apex evil corporation you are very mistaken. There is hardly any separation between mega corps and the state above a certain level. The same people are in majority control of IBM, Procter & Gamble, Nike, Boeing, Lockheed Martin, etc.
Stop consuming marketing materials as gospel.
What you see as this or that atrocity on CNN or wherever is *produced propaganda*, made for you, and you are swallowing it blindly without thinking.
Also, the responsibility of course comes down to individuals and their actions -- whether you know their names or not. Objects do not go to war on their own.
I've also worked in aerospace and aviation software but that doesn't preclude me from thinking clearly about whether I'm responsible for this or that thing on the news involving planes -- you might want to stop consuming that.
Has anyone considered that the demand for web sites and software in general is collapsing?
Everyone and everything has a website and an app already. Is the market becoming saturated?
I know a guy who has this theory, in essence at least. Businesses use software and other high-tech to make efficiency gains (fewer people getting more done). The opportunities for developing and selling software were historically in digitizing industries that were totally analog. Those opportunities are all but dried up and we're now several generations into giving all those industries new, improved, but ultimately incremental efficiency gains with improved technology. What makes AI and robotics interesting, from this perspective, is the renewed potential for large-scale workforce reduction.
And new companies are created every day, and new systems are designed every day, and new applications are needed every day.
The market is nowhere close to being saturated.
You're just begging the question.
What are examples of these "new applications" that are needed every day? Do consumers really want them? Or are software and other companies just creating them because it benefits those companies?
Most of the software written worldwide is created for internal company usage. Consumers don't even know that it exists.
I've worked (still do!) for engineering services companies. Other businesses pay us to build systems for them to either use in-house or resell downstream. I have to assume that if they're paying for it, they see profit potential.
I think your post pretty well illustrates how LLMs can and can't work. Favoriting this so I can point people to it in the future. I see so many extreme opinions on them, from 'LLMs are basically AGI' to 'total garbage', but this is a good, balanced - and concise! - overview.
Markets are not binary though, and this is also what it looks like when you're early (unfortunately, similar to when you're late too). So they may well be able to carve out a valid & sustainable market exactly because they're not doing what everyone else is doing right now. I'm currently taking online Spanish lessons with a company that uses people as teachers, even though this area is under intense attack from AI. There is no comparison, and what's really great is using many tools (including AI) to enhance a human product. So far we're a long way from the AI tutor that my boss keeps envisioning. I actually doubt he's tried to learn anything deep lately, let alone validated his "vision".
This is the type of business that's going to be hit hard by AI. And the type of businesses that survive will be the ones that integrate AI into their business the most successfully. It's an enabler, a multiplier. It's just another tool and those wielding the tools the best, tend to do well.
Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
The expertise and skill still matter. But customers are going to get a lot further without such a studio and the remaining market is going to be smaller and much more competitive.
There's a lot of other work emerging though. IMHO the software integration market is where the action is going to be for the next decade or so. Legacy ERP systems, finance, insurance, medical software, etc. None of that stuff is going away or at risk of being replaced with some vibe coded thing. There are decades worth of still widely used and critically important software that can be integrated, adapted, etc. for the modern era. That work can be partly AI assisted of course. But you need to deeply understand the current market to be credible there. For any new things, the ambition level is just going to be much higher and require more skill.
Arguing against progress as it is happening is as old as the tech industry. It never works. There's a generation of new programmers coming into the market and they are not going to hold back.
> Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
So let's all just give zero fucks about our moral values and just multiply monetary ones.
>So let's all just give zero fucks about our moral values and just multiply monetary ones.
You are misconstruing the original point. They are simply suggesting that the moral qualms of using AI are not that high - neither to the vast majority of consumers nor to the government. There are a few people who might exaggerate these moral issues for self-serving reasons, but they won't matter in the long term.
That is not to suggest there are absolutely no legitimate moral problems with AI, but they will pale in comparison to what the market needs.
If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.
You can start by explaining what your specific moral value is that goes against AI use? It might bring to clarity whether these values are that important at all to begin with.
> If AI can make things 1000x more efficient,
Is that the promise of the faustian bargain we're signing?
Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?
While humans have historically only mildly reduced their working time, down to today's 40h workweek, their consumption has gone up enormously, and whole new categories of consumption have opened up. So my prediction is that while you'll never live in a 900,000 sq ft apartment (unless we get O'Neill cylinders from our budding space industry), you'll probably consume a lot more, while still working a full week.
40h is probably up from pre-industrial times.
Edit: There is some research covering work time estimates for different ages.
Let's kill this myth that people were lounging around before the Industrial Revolution. Serfs, for example, were working both their own land and their lord's land, as well as doing domestic duties in between. They really didn't have as much free time as we do today, plus their work was way more backbreaking, literally, than most people's cushy sedentary office jobs.
We could probably argue until the end of time about the qualitative differences in quality of life between then and now. In general, though, the metrics of consumption and time spent acquiring that consumption have gotten better over time.
I don't think a general sentiment matters much here when the important necessities are out of reach. The hierarchy of needs is outdated, but the inversion of it is very concerning.
We can live without a flat screen TV (which has gotten dirt cheap). We can't live without a decent house. Or worse, while we can live in some 500 sq ft shack, we can't truly "live" if there are no public amenities to gather and socialize in without being nickel-and-dimed.
What was all this free time spent doing in the pre-industrial era?
Pre-industrial? Lots of tending the farm, caring for family, and managing slaves, I suppose. There was some free time in between to work with your community for bonding or business dealings or whatnot.