A competitor crippled a $23.5M bootcamp by becoming a Reddit moderator

larslofgren.com

1287 points by SilverElfin 5 days ago


https://archive.ph/w0izj

raincom - 4 days ago

That's common. A marketing company took over r/mattress in order to get rid of any unfavorable reviews and pump up any bed-in-a-box mattress company, as long as those companies paid that marketing company. For more: https://www.reddit.com/r/MattressMod/comments/1c28g7b/recent...

Havoc - 4 days ago

Used to moderate a decent-sized sub for a decent stint, and learned a fair bit from it. Eventually decided to step back because it's a raw deal: all interactions are antagonistic, the torrent of confrontations is essentially endless, the work isn't seen or appreciated by users, and obviously it isn't paid.

So not much of a personal payoff, right? UNLESS you're the kind of person who thrives on drama, conflict and power trips.

Meaning this actively filters for people who are radioactively toxic.

jibal - 4 days ago

The discussion at https://larslofgren.com/codesmith-reddit-reputation-attack/ is not going well for Mr. Novati.

Edit: oops, that was the wrong link! I meant to link to the reddit post discussing that article: https://www.reddit.com/r/codingbootcamp/comments/1o1guxj/tho...

I think one of the most telling facts is that the pro-Novati poster u/Ok-Donuts has posted numerous comments that clearly violate all norms, yet seems to be immune to moderation.

sixhobbits - 4 days ago

FWIW, there are similar top-of-Google and top-of-Reddit results for all bootcamps. Try googling 'lambda school', 'hyperiondev', 'coding temple', or 'le wagon' along with 'reddit', 'review', 'legit', or anything similar (often that's not even needed).

In the end, these bootcamps charge people thousands of dollars to sell them the dream of getting a high paying job after 3 months of part time work, and that's just not realistic.

In addition, in order to survive and sell to consumers, most bootcamps are 90% sales and marketing, and 10% education. They use their own students to teach the next generation, and increase their job placement rates (if you hire your own grad you can claim that that grad got a job!).

I used to work in the industry, and in theory I think it's great to have alternatives to universities, which can be elitist and out of date with new tech, but I left because it felt kinda like the used-car market of CS, and I don't think it's a great model overall.

shagie - 4 days ago

Reddit's moderation manages to be both too easy and too difficult at the same time.

It is very easy to ban someone. Make the ban permanent, have the moderator block the person (so they can't send messages), and there is effectively no appeal process.

Another part is that for any sub of reasonable volume, trying to actively moderate and shape discussion beyond banning the most egregious actors is difficult. Deleting and locking posts for a finer level of moderation is time-consuming. The judgment calls of "when is this going off the rails?" become more snap over time.

With the time-consuming nature of actually moderating a sub and the ease of just banning someone, moderation becomes the policy of whoever has the most time. The stereotypical variations of this are the paid social media manager whose job it is to scrub anything positive about a competitor or negative about their brand, and the person who moderates out of a deep interest in the subject but who holds strong opinions too.

With multiple active moderators, the most extreme views of each in turn become the overall "moderation philosophy" (and if those views are opposed, the oldest moderator wins).

Combined with the echo-chamber nature of the message board, ever more extreme stances become the dominant ones.

To try to present a consistent approach to moderation (Reddit has gotten burned by inconsistent responses many times in the past), Reddit, Inc. appears to be trying to stay completely hands off. That in turn means it takes extreme situations for corporate to get involved, often long after they were first alerted to the problem. Having let the problem fester for so long, whatever is finally done tends to be very heavy-handed and lopsided, and generates a significant amount of discontent that spreads elsewhere.

So, you've got a site that hosts thousands of message boards that inevitably grow more and more partisan toward one extreme or the other, serve mostly as facades for a corporation, or act as propaganda for a political organization.

It is impressive that it has remained "stable" for as long as it has.

jacquesm - 4 days ago

I've seen a lot of shady moderation on Reddit and it's one reason I quit using it. There is the obvious brigading and the mods on power trips, but also massive, probably paid, astroturfing campaigns. Reddit has gone downhill substantially in the last five years. HN is not immune either, but at least we don't have a "mods on power trips" problem; in fact, quite the opposite.

karlkloss - 4 days ago

Moderators are the reason I stopped using Reddit years ago. Any idiot can become a moderator, and there seem to be no rules for them. Suppressing free speech and banning everyone who doesn't share their opinion seems to be fine by them.

pkphilip - 4 days ago

Reddit has a huge moderation issue. Mods run the place like their own fiefdoms with no regard for fairness. There should be a way of flagging Reddit users, and especially mods, who have a clear conflict of interest (as is the case with Michael Novati), and Reddit should not allow them to run groups where they openly harass their competitors.

trilogic - 4 days ago

This has to stop right now; it has gone on long enough. Reddit, Google, ProductHunt, YouTube and friends continuously use dirty, unethical, even illegal techniques driven by profit. I have experienced all of this myself and I can confirm that the writer is 100% correct. He forgot to mention, though, that all of this is driven by the same agenda, the same people who want to control the narrative. I wrote about it too: https://medium.com/@klaudibregu/hugstonone-empowering-users-...

Now OpenAI has joined the agenda, and they are also playing very dirty. Yesterday Hugging Face deleted the account of a talented user, @BasedBase, who was creative with open weights (threatening big tech). The same bot army discredited and reported his work on Hugging Face and Reddit until all his accounts were gone. They have done the same to me personally ever since I started with Hugston.com and HugstonOne. Just try googling my company name, "Sverken" (which was associated with Hugston.com): what comes up is porn and prostitution services. Even though this is illegal and wrecks our reputation, Google thinks it is legit and won't take the information down. Instead they decided to rank it first on the first page.

I have made some calculations, and HugstonOne is indeed very threatening to big tech. If our local AI app takes away only 0.0001% of users from proprietary model websites like OpenAI's, that is a huge amount of money. And that is just one of them. They have tried everything possible to shut us up, to suppress and undermine our work, to discredit us in abusive ways, but they won't succeed. Thank you for speaking up; I hope many others do as well. I really wish you get back on your feet soon, and the best of luck.

Lerc - 4 days ago

The LLM aspect of this, I think, shows both a common weakness and an opportunity.

If you suspect something is a commonly held misconception, asking an LLM about it is frequently close to useless, because the abundance of text repeating the misconception (it is common, after all) just makes the model repeat the falsehood. Asking the model to apply a more balanced view quite often triggers an iconoclastic antagonism that just gives you the opposite of whatever the commonly held opinion is. I have had some success asking for the divergence between academia and public opinion on particular issues.

Models are still very bad at determining truth from opinion, but these are exactly the times when we need the models to work the best.

There may be an opportunity if there are enough known examples of instances like this story for a dataset to be built from which a model can be taught to identify the difference between honest opinion and bad-faith actors, or, more critically, to distinguish confidently asserted statements from those supported by reasoning.

Unfortunately I can see models that can identify such falsehoods being poorly received. When a model can say you are wrong and everybody around you says you are right, what are the chances of people actually considering the possibility that the model is, in fact, correct?

A_D_E_P_T - 4 days ago

Reddit should not be considered an authoritative source. Period. At this point it's the most astroturfed place on the internet. Accounts are bought and sold like cheap commodities. It's inherently unreliable.

That said, in this instance Codesmith actually has an unusually strong defamation case. The Reddit mod is not anonymous, and has made concrete claims (nepotism allegations with fabricated details, accusations of a resume-fraud conspiracy, etc.) that have resulted in quantifiable damage ($9.4M in revenue loss attributed to the Reddit attacks), with what looks like substantial evidence of malice.

Reddit, though protected to some extent by Section 230, can also credibly be sued if (1) they are formally alerted to the mod's behavior, i.e. via a legal letter, and (2) they do nothing despite the fact that the mod's actions appear to violate their Code of Conduct for Moderators. For that matter, (2) might become something for a judge or jury to decide.

I'm actually confused as to why Codesmith hasn't sued yet. Even if they lose, they win: being a plaintiff in a civil case can turn the tables and make them feel powerful rather than helpless, and it's often the case that "the process is the punishment" for defendants.

analog8374 - 4 days ago

Michael reminds me of a fellow named ewk from the Zen subreddit, in his obsessive energy and poisonous tactics. It really is a thing to see. A type. There must be a name for it.

neilv - 4 days ago

I don't know about this particular case, but, generally... bad actor subreddit moderators have been an occasional thing for well over a decade.

And it's also been widely known for that long that Reddit is an influential venue in which to take over a corner -- for marketing or propaganda.

What concerns me equally is how insufficiently resilient Reddit collectively appears to be in the face of this.

A bad actor mod of a popular subreddit can persist for years, visibly, without people managing either to oust the mod, or to take down the sub's influence.

(Subreddit peasants sometimes migrate to a new sub over bad mods, but the old sub usually remains, still with a healthy brand and still with a lot of members, who (I'm speculating) maybe don't want to risk missing out on something in the bad old sub, or didn't know what was going on, or decided the drama they noticed in their feed wasn't worth the clicks to leave the sub in question.)