• 1 Post
  • 3.12K Comments
Joined 1 year ago
cake
Cake day: February 10th, 2025

help-circle
  • In my testing, by copying the claimed ‘prompt’ from the article into Google Translate, it simply translated the command. You can try it yourself.

    So, the source of everything that kicked off the entire article, is ‘Some guy on Tumblr’ vouching for an experiment, which we can all easily try and fail to replicate.

    Seems like a huge waste of everyone’s time. If someone is interested in LLMs, then consuming content like in the OP feels like knowledge but it often isn’t grounded in reality or is framed in a very misleading manner.

    On social media, AI is a topic that is heavily loaded with misinformation. Any claims that you read on social media about the topic should be treated with skepticism.

    If you want to keep up on the topic, then read the academia. It’s okay to read those papers even if if you don’t understand all of it. If you want to deepen your knowledge on the subject, you could also watch some nice videos like 3Blue1Brown’s playlist on Neural Networks: https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi. Or brush up on your math with places like Khan Academy (3Blue1Brown also has a good series on Linear Algebra if you want more concepts than calculations).

    There’s good knowledge out there, just not on Tumblr



  • A bit flip, but this reads like people discovering that there is a hammer built specifically for NASA with specific metallurgical properties at the cost of $10,000 each where only 5 will ever be forged, because they were all intended to sit in a space ship in orbit around the Moon.

    Then someone comes along and posts and article about a person who posted on Tumblr about how they were surprised that one was used to smash out a car window to steal a Door Dash order.


    LLMs will always be vulnerable to prompt injection because of how they function. Maybe, at some point in the future, we’ll understand enough about how LLMs represent knowledge internally so that we can craft specific subsystems to mitigate prompt injection… however, in 2026, that is just science fiction.

    There are actual academic projects which are studying the boundaries of the prompt-injection vulnerabilities if you read in the machine learning/AI journals. These studies systemically study the problem, gather data and demonstrate their hypothesis.

    One of the ways you can tell real Science from ‘hey, I heard’ science is that real science articles don’t start with ‘Person on social media posted that they found…’

    This is a very interesting topic and if you’re interested you can find the actual science by starting here: https://www.nature.com/natmachintell/.






  • There’s a third path, where creatives can use tools for “real-time dreaming” and also the companies who try to replace workers with AI fail because they put out terrible products untouched by human creativity.

    It looks like Simon is, incorrectly in my opinion, accepting the premise that’s being sold to the investors. i.e. That using AI will allow the replacement of human labor. Saying that it is obvious that this will happen in 2 years or 10 years.

    In order to believe that, you would have to believe that AI is capable of replacing humans. I don’t think that is going to be the case.

    It is simply not supported by evidence or any existing technology. At best, AI is a semi-useful tool for some tasks in programming or art. There is no use-case in either of these fields that AI performs better than or even equal to that of trained humans.

    It’s far more likely that we’re in a world where a bunch of publicly traded game studios are about to throw themselves off a cliff chasing AI while getting rid of their talent. Studios that use actual humans (using whatever tools they choose to use) will always have an advantage over anything built primarily on AI output…



  • Yeah, the bounce is scary. Def. a steel toed boots kind of job.

    A, reasonably sharp, maul usually doesn’t bounce, as long as it hits square… but, if the angle is off and the head rotates instead of chops, it can give your arm a bit of a yank.

    I had to do it the old fashioned way until I graduated college. :x Now, I’m old and lazy (and it’s also Florida weather, so wood is more of a cooking ingredient than a source of heat) so I just borrow a hydraulic splitter for an afternoon once a year.






  • Add this to the giant list of things to fix after the fascist revolution is defeated.

    Companies should not be able to market ‘Entertainment news shows’ or ‘Opinion news shows’ as actual fact-based objective reality ‘News’. Just like manufacturers can’t label peanut butter as ‘allergen free’, we understand that product labels are important information for a consumer so that they can make informed choices.

    Having TV shows like Fox News being able to pretend to be a real news organization is one of the first weaknesses that was exploited in the media system.





  • Smaller communities, where you actually know the screen names of a lot of the active users, are much higher quality in terms of actual conversation.

    On Reddit, for example, you’re rarely talking to anybody in specific, just yelling into the void. This is painfully obvious if you ever try to engage someone who only exists on commercial social media. Their ability to have a conversation or entertain another point of view is almost non-existent. Their comments read more like they were written to chase upvotes than in an actual attempt to engage with the human on the other side of the conversation.

    The influencer-driven obsession with views/likes/subscribers makes people think that small communities (like this one) are somehow worse. That’s just not true, as long as your goal is more ‘social’ and less ‘media’.


  • The big danger here, which these steps mitigate but do not solve are:

    #1 Algorithmically curated content

    On the various social media, there are systems of automated content moderation that are in place that remove or suppress content. Ostensibly for protecting users from viewing illegal or disturbing content. In addition, there are systems for recommending content to a user by using metrics for the content, metrics for the users combined with machine learning algorithm and other controls which create a system of controls to both restrict and promote content based on criteria set by the owner. We commonly call this, abstractly, ‘The Algorithm’ Meta has theirs, X has theirs, TikTok has theirs. Originally these were used to recommend ads and products but now they’ve discovered that selling political opinions for cash is a far more lucrative business. This change from advertiser to for-hire propagandist

    The personal metrics that these systems use are made up of every bit of information that the company can extract out of you via your smartphone, linked identity, ad network data and other data brokers. The amount of data that is available on the average consumer is pretty comprehensive right down to knowing the user’s rough/exact location in real-time.

    The Algorithm used by social media companies are a black box, so we don’t know how they are designed. Nor do we know how they are being used at any given moment. There are things that they are required to do (like block illegal content) but there are very little, if any, restrictions on what they can block or promote otherwise nor are there any reporting requirements for changes to these systems or restrictions on selling the use of The Algorithm for any reason whatsoever.

    There have been many public examples of the owners of that box to restricting speech by de-prioritizing videos or restricting content containing specific terms in a way that imposes a specific viewpoint through manufactured consensus. We have no idea if this was done by accident (as claimed by the companies, when they operate too brazenly and are discovered), if it was done because the owner had a specific viewpoint or if the owner was paid to impose that viewpoint.

    This means that our entire online public discourse is controllable. That means of control is essentially unregulated and is increasingly being used and sold for, what cannot be called anything but, propaganda.

    #2 - There is no #2, the Algorithms are dangerous cyberweapons, their usage should be heavily regulated and incredible restrictions put on their use against people.