(They/Them) I like TTRPGs, history, (audio and written) horror and the history of occultism.

  • 0 Posts
  • 49 Comments
Joined 5 months ago
Cake day: January 24th, 2025



  • Misogyny in stuff can be really complicated. Sometimes you can only really see it holistically, and sometimes it’s only in specifics. Sometimes a story will give a woman a lot of focus, place her feelings and emotions in the spotlight and give her actions the most agency and power over the plot- while also having her be inexplicably dressed in lingerie the whole time with a really weak excuse, if any.

    Like, I love FF12. Ashe is indisputably the actual main character in it, and her story is about being a person with authority in a time of war. It’s about grappling with your own grief and desire for revenge while trying to keep in mind your principles and what you believe in. It somehow manages to be about both the divine right of kings and weapons of mass destruction, and it maintains its emotional through line almost all the way to the end!

    But also, Ashe, that hot pink mini-skirt? Girrrrrl, WTF, you live in a desert. You’re gonna fight things in a skirt made of two pink napkins? There’s no real reason for her to dress like that, and it’s definitely just for fan service!

    I still love the game, but I acknowledge that it has that problem. It objectifies women because it treats them as visual treats and has them dress in bizarre ways that don’t flow adequately from their characterization. This is because of structural societal things, and it sucks for a bunch of reasons.

    Bayonetta is different primarily because the work’s themes are, as I understand them, incredibly positive about women being active, powerful sexual people who do what they want.

    B dresses like that because she likes being hot, and it’s a characterization tool, and it’s never a disempowering thing for her.

    Like, Kill la Kill has ridiculous outfits, but I’ve had multiple women tell me they love it because of how it intersects with things they like. I wasn’t going to watch it until one of them insisted and, yeah, it’s pretty good. The sexual elements are intended and used as part of the narrative, and the emotional through line is very strong.

    So, it’s one of those things that needs an exhaustive breakdown to really judge in a given work. I don’t know enough about this one to say, and I’m just commenting in hopes that it’s useful for you or someone else looking at doing media analysis of this type.





  • Hi!

    So most people build their value system upwards from foundational axioms that they accept a priori. You know, someone might begin with a moral principle like, “Happiness is good,” or “I should act with compassion.”

    Then, they construct outwards from there, using their foundational moral touchstones to judge if an action, philosophy or moral principle is worth following, or what compromises must be made with it in order for it to be worth following.

    Like, I wouldn’t expect someone who believes “Happiness is good,” to follow a moral law that causes suffering, because they think happiness is a good thing and should be aspired to.

    If there’s a conflict between two foundational axioms on something, then you have to create a compromise on an area or subject, or create a system of priority for yourself.

    In my case, I think the highest virtue is compassion for other people. Because of this, I think society should be structured to benefit people.

    That’s why I support the idea of public information sources. The spread of accumulated human knowledge, culture and wisdom can benefit everyone by allowing ideas to mix, spread and be worked on by many people. It allows for an incredible richness to people’s lives and I think that it’s a wonderful example of why society should exist.

    Now, there’s a compromise I have to make here, because I don’t place “Publicly available information should be universally and uniformly available to all potential patrons,” as my highest virtue. I support it because of an expression of my values, but not for its own sake.

    Companies are not people. They’re built out of people, but they are not people themselves. They’re organizations that are not built with human life as a core value.

    You may leap to saying that I’m calling them murderers or something, but only if you’re not willing to absorb or consider my words, because I’m being very precise here- companies have an incentive to make money. Companies which fail to make money cease to exist. Therefore, the companies that are most successful and most likely to exist are those which place profit above all other values in their decision-making architecture.

    This is an emergent property of how they are structured, and not a product of any individual person’s desire. The system is built in a way that rewards a behavior, and so it will be organized to optimize that behavior.

    Human life and happiness do not directly lead to companies being successful. They are a secondary concern- companies will pursue them if, and only if, doing so does not conflict with profit motives. If the cost of ignoring human suffering is below the cost of caring, they will not care. It becomes a public relations issue.

    Because of this, I oppose the existence of for-profit companies because they violate my fundamental philosophical values by driving towards the creation of human suffering. It’s an inevitability of their construction, and rectifying it would require completely reworking how our economic system is built.

    Now, you could say that this is a minor example of this issue, and that I am acting out of proportion. This isn’t hurting anyone in any legally actionable way, and ultimately is a transitory concern. A small restructuring of how things are organized would smooth things over and produce a satisfactory base state of affairs.

    That’s, however, not the point. The point is that a corporation will push as far as it can into consuming public resources, even if this does cause real harm. Allowing them to act in this way, even on a minor issue, would be a break from my moral values. I must oppose them because they do not have the right to cause any amount of harm for the abstract notion of economic progress- especially when they are using it to feed a wrong-headed venture that is consuming other, vitally important resources which humans need to survive in direct and unquestionable ways.

    Massive server farms require electricity, which we have not implemented a widespread way to acquire without causing ecological damage. They require water, which humans need to live.

    LLMs have uses as tools, but those uses are far outstripped by the way in which corporations wish to use them, which is to reduce the amount of economic support they have to give to other humans, because it doesn’t matter to them if you die. It doesn’t even matter to them if it works to replace people, so long as it allows them to increase their profits for even an infinitesimal amount of time.

    They are killing people, and I do not wish to extend any benefit to them. I do not want them to have any additional power, no matter how small or insignificant.

    Of course, with regards to you, there are only two options I see.

    Either you knew all of this already, and were intentionally playing the fool for, I don’t know, your amusement?

    Or, you somehow entered this discussion with an awe-inspiring lack of awareness of how values and moral systems work, and your only method of not being deeply embarrassed by your conduct is to cling to the notion that the simplistic binary you present is somehow relevant.


  • Dude, my problem is that capitalism is going to ruin everything. It is a rotting sickness that cuts through every layer of society and creates systemic, ugly problems.

    Do you know how excited I was when LLM tech was announced? Do you know how much it sucked to realize, so soon, that companies were going to do their best to use it to optimize profits?

    The free access of information problem is just a manifestation of this dark specter on society.

    You are acting as if we can approach this problem in the abstract, where you have to abide by simplistic, binary philosophical rules, rather than acknowledging that we live in a world of constant moral compromise and complexity.

    It’s not as simple as, “Oh, you say that you believe in freedom of information, but curious how you don’t want private companies to use it to make money at your expense! Guess you’re a hypocrite.”

    Tell me what you actually believe, or stop cycling back to this like it’s a damning rebuttal.



  • Really?

    Okay, look, the reason people are disagreeing with you is that you’re responding to the following problem:

    “Private companies are preventing access to public resources due to their rapacious, selfish greed.”

    And your response has been:

    “By changing how we structure things to make it easier for them to take things, we can both enjoy the benefits of the public resources.”

    The companies are not the same as normal patrons. They’re motivated by a desire for infinite growth and will consume anything that they can access for low prices to resell for high ones. They do not contribute to these public resources, because they only wish to exploit them for the potential capital they hold.

    Drawing an equivalence between these two things requires the willful disregard of this distinction, so that you can act as if the underlying moral principle is being betrayed just because your rhetorical opponent didn’t define it as rigorously as possible. They didn’t define it that way because they expected you to engage with this in good faith.

    Why are you doing this?




  • There’s a difference between making information accessible to humans for the purposes of advancing our shared knowledge vs saying that public institutions should subsidize the needs of private for-profit organizations.

    It’s like, you can say, “Oh yeah, people should have access to freshwater for free,” and also say, “Companies shouldn’t be allowed to pump infinite freshwater from those sources to bottle it for profit.”

    Those aren’t contradictory if your actual goal is the benefit of humankind and not, like, pedantic genie logic.


  • Search engines are already basically worthless, so I’m not surprised by the falling axe.

    The shift from search engines actually indexing things to search through to trying to parse a question and find an answer has been the most irritating trend for me. I remember when you could just put in a series of words and be delivered unto every indexed page that had all of them.

    Now I regularly get told that even common words don’t exist when I insist that, no, Google, I only want results containing the exact words I put in.

    This is my old person rant, I guess. /s

    This change is probably going to cause huge problems for a lot of existing sites, especially because it means Google will probably start changing their advertising model now that they can consolidate the views into a specific location and pocket the money. The article mentions this, but doesn’t realize the implications.

    “The internet will still be around,” is only true if you hold that the super consolidated, commercialized nexus of doom is going to continue on just fine, while countless small, very useful websites made by actual people for actual reasons fade away into oblivion.

    It sucks to watch something I have loved my whole life die, but it’s going bit by bit because we can’t convince our politicians to do anything about it.




  • Hi, once more, I’m happy to have a discussion about this. I have very firm views on it, and enjoy getting a chance to discuss them and work towards an ever greater understanding of the world.

    I completely understand the desire to push back against certain kinds of “understandings” people have about LLMs due to their potentially harmful inaccuracy and the misunderstandings that they could create. I have had to deal with very weird, like, existentialist takes on AI art lacking a quintessential humanity that all human art is magically endowed with- which, come on, there are very detailed technical art reasons why they’re different, visually! It’s a very complicated phenomenon, but it’s not an inexplicable cosmic mystery! Take an art critique class!

    Anyway, I get it- I have appreciated your obvious desire to have a discussion.

    On the subject of understanding, I guess what I mean is this: Based on everything I know about LLMs, their “information processing” happens primarily in their training. This is why you can run an LLM instance on, like, a laptop, but it takes data centers to train them. They do not actually process new information, because if they did, you wouldn’t need to train them, would you- you’d just have them learn and grow over time. An LLM breaks its training data down into patterns and shapes and forms, and uses very advanced techniques to generate the most likely continuation of a collection of words. You’re right in that they must answer, but that’s because their training data is filled with that pattern of answering the question. The natural continuation of a question is, always, an answer-shaped thing. Because of the miracles of science, we can get a very accurate and high fidelity simulation of what that answer would look like!
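
    To make the “most likely continuation” point concrete, here’s a minimal sketch. It assumes the Hugging Face transformers library and the small public gpt2 model, which are just stand-ins for whatever you might run locally- the point is only that generation is prompt continuation, not lookup.

    ```python
    # Minimal sketch (assumes the `transformers` library and the public "gpt2" model):
    # the model does not look anything up; it just extends the prompt with whatever
    # tokens its training data makes most probable after a question.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Q: Which My Little Pony character has a southern accent?\nA:"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Greedy decoding: pick the single most likely next token, 30 times over.
    output = model.generate(**inputs, max_new_tokens=30, do_sample=False)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```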

    Understanding, to me, implies a real processing of new information and a synthesis of prior and new knowledge to create a concept. I don’t think it’s impossible for us to achieve this technologically- humans manage it, and I’m positive that we could eventually figure out a synthetic method of replicating it. I do not think an LLM does this. The behavior they exhibit and the methods they use seem radically inconsistent with that end. Because the ultimate goal was not to create a thinking thing, but to create something that’s able to make human-like speech that’s coherent, reliable and conversational. They totally did that! It’s incredibly good at that. If it were not for the context of them politically, environmentally and economically, I would be so psyched about using them! I would have been trying to create templates to get an LLM to be an amazing TTRPG oracle if it weren’t for the horrors of the world.

    It’s incredible that we were able to have a synthetic method of doing that! I just wish it was being used responsibly.

    An LLM, based on how it works, cannot understand what it is saying, or what you are saying, or what anything means. It can continue text in a conversational and coherent way, with a lot of reliability in how it does that. The size, depth and careful curation of its training data mean that its responses are probably as close to an appropriate response as they can be. This is why, for questions of common knowledge, or anything you’d do a light google for, they’re fine. They will provide you with an appropriate response because the question probably exists hundreds of thousands of times in the training data; and the information you are looking for also exists in huge redundancies across the internet that got poured into that data. If I ask an LLM which of the characters of My Little Pony has a southern accent, it will probably answer correctly because that information has been repeated so much online that it probably dwarfs the human written record of all things from 1400 and earlier.

    The problem becomes evident when you ask something that is absolutely part of a structured system in the English language, but which has a highly variable element to it. This is why I use the “citation problem” when discussing them, because they’re perfect for this: A citation is part of a formal or informal essay, which are deeply structured and information dense, making them great subjects for training data. Their structure includes a series of regular, repeating elements in particular orders: name, date, book title, year, etc.- these are present and repeated with such regularity that the pattern must be quite established for the LLM as a correct form of speech. The names of academic books are often also highly patterned, and an LLM is great at creating human names, so there’s no problem there.

    The issue is this: How can an LLM tell if a citation it makes is real? It gets a pattern that says, “The citation for this information is:” and it continues that pattern by putting a name, date, book title, etc. in that slot. However, this isn’t like asking what a rabbit is- the pattern of citations leads into an endless warren of hundreds of thousands of names, book titles, dates, and publishing companies. It generates them, but it cannot understand what a citation really means, just that there is a pattern it must continue- so it does.
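
    As a deliberately silly toy- not how an LLM is actually implemented, just an illustration of “shape without content”- imagine a generator that only knows which pieces a citation slot contains and fills them from pools of plausible-looking parts (every author, title and publisher below is invented purely for illustration):

    ```python
    import random

    # Toy "citation-shaped" generator: it knows the *structure* of a citation
    # (author, year, title, publisher), but nothing checks whether the book exists.
    # All values here are made up for the sake of the example.
    AUTHORS = ["Harrow, J.", "Okafor, N.", "Lindqvist, M."]
    TITLES = [
        "Ritual and Power in Early Modern Europe",
        "Currents of the Occult",
        "The Hidden Archive",
    ]
    PUBLISHERS = ["Example University Press", "Imaginary Academic Press"]

    def citation_shaped_string() -> str:
        # Assemble something that *looks* like a citation, with no tie to reality.
        return (f"{random.choice(AUTHORS)} ({random.randint(1965, 2019)}). "
                f"{random.choice(TITLES)}. {random.choice(PUBLISHERS)}.")

    print("The citation for this information is:", citation_shaped_string())
    ```

    A real LLM is vastly more sophisticated than that, obviously, but the failure mode is analogous: the slot gets filled with something citation-shaped, and nothing in the process verifies the book.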

    Let me also ask you a counter question: do you think a flat-earther understands the idea of truth? After all, they will blatantly hallucinate incorrect information about the Earth’s shape and related topics. They might even tell you internally inconsistent statements or change their mind upon further questioning. And yet I don’t think this proves that they have no understanding about what truth is, they just don’t recognize some facts as true.

    A flat-earther has some understanding of what truth is, even if their definition is divergent from the norm. The things they say are deeply inaccurate, but you can tell that they are the result of a chain of logic that you can ask about and follow. It’s possible to trace flat-earth ideas down to sources. They’re incorrect, but they’re arrived at because of an understanding of prior (incorrect) information. A flat-earther does not always invent their entire argument and the basis for their beliefs on the spot; they are presenting things they know about from prior events- they can show the links. An LLM cannot tell you how it arrived at a conclusion, because if you ask it, you are just receiving a new continuation of your prior text. Whatever it says is accurate only when probability and data set size are on its side.


  • And, yes, I can prove that a human can understand things when I ask: Hey, go find some books on a subject, then read them and summarize them. If I ask for that, and they understand it, they can then tell me the names of those books, because their summary is based on actually taking in the information, analyzing it, and reorganizing it by apprehending it as actual information.

    They do not immediately tell me about the hypothetical summaries of fake books and then state with full confidence that those books are real. The LLM does not understand what I am asking for, but it knows what the shape is. It knows what an academic essay looks like and it can emulate that shape, and if you’re just using an LLM for entertainment that’s really all you need. The shape of a conversation for a D&D NPC is the same as the actual content of it, but the shape of an essay is not the same as the content of that essay. Essays are too diverse, they carry critical information, and they are about that information. The LLM does not understand the information, which is why it makes up citations- it knows that a citation fits in the pattern, and that citations are structured with a book name and author and all the other relevant details. None of those are assured to be real, because it doesn’t understand what a citation is for or why it’s there, only that they should exist. It is not analyzing the books and reporting on them.


  • Hello again! So, I am interested in engaging with this question, but I have to say: My initial post is about how an LLM cannot provide actual, real citations with any degree of academic rigor for a random esoteric topic. This is because it cannot understand what a citation is, only what it is shaped like.

    An LLM deals with context over content. They create structures that are legible to humans, and they are quite good at that. An LLM can totally create an entire conversation with a fictional character in their style and voice- that doesn’t mean it knows what that character is. Consider how AI art can have problems that arise from the fact that the model understands the shape of something, but doesn’t know what it actually is- that’s why early AI art had a lot of problems with objects ambiguously becoming other objects. The fidelity of these creations has improved with the technology, but that doesn’t imply understanding of the content.

    Do you think an LLM understands the idea of truth? Do you think if you ask it to say a truthful thing, and be very sure of itself and think it over, it will produce something that’s actually more accurate or truthful- or just something that has the language hallmarks of being truthful? I know that an LLM will produce complete fabrications that distort the truth if you expect a baseline level of rigor from it, and I proved that above, in that the LLM couldn’t even accurately report the name of a book it was supposedly using as a source.

    What is understanding, if the LLM can make up an entire author, book and bibliography if you ask it to tell you about the real world?


  • What’s yours? I’m stating that LLMs are not capable of understanding the actual content of any words they arrange into patterns. This is why they create false information, especially in places like my examples with citations- those are purely the result of the model producing “academic citation”-sounding sets of words. It doesn’t know what a citation actually is.

    Can you prove otherwise? In my sense of “understanding,” it means actually knowing the content and context of something, being able to actually subject it to analysis and explain it accurately and completely. An LLM cannot do this. It’s not designed to- there are neural network AIs built on similar foundational principles toward divergent goals that can produce remarkable results in terms of data analysis, but not ChatGPT. It doesn’t understand anything, which is why you can repeatedly ask it about a book only to look it up and discover it doesn’t exist.