• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 2nd, 2023


  • garyyo@lemmy.world to Risa@startrek.website · Space is 2D, right? · 8 months ago

    Actually, space in general is mostly 2-dimensional, in the sense that all the interesting stuff generally takes place on some sort of almost-flat plane. A star system generally lies on a plane, so does the galaxy, and so do most planet+moon systems. They just tend to be different planes, so for ease of communication you will probably just align your idea of “down” with whatever the most convenient plane is. This of course ignores gravitational down, which changes as your thrust does.

    And as for ship alignment, yeah, no one is going to worry about that till it’s time to dock, at which point the lighter vessel will likely change its orientation, since that’s easier and takes less energy. Spaceships are not going to be within human sight range of each other most of the time, even when they’re in relatively the same area. Space is too big, and getting ships close to each other is dangerous!

    But in media that fucks with people’s idea of meeting and seeing each other, so for the convenience of not confusing the audience you don’t see that level of realism often.


  • Idk about anyone else but it’s a bit long. Up to q10 I took it seriously and actually looked for AI-gen artifacts (and got all of them up to 10 correct), and then I just sorta winged it, guessed, and got like 50% of the rest right. OP, if you are going to use this data anywhere, I would first recommend getting all of your sources together, as some of those did not have a good source, but also maybe watch out for people doing what I did: getting tired of the task and just wanting to see how well they did on the part they tried. I got like 15/20.

    For anyone wanting to get good at seeing the tells, focus on discontinuities across edges: the number or intensity of wrinkles across the edge of eyeglasses, or the positioning of a railing behind a subject (especially if there is a corner hidden from view: you can imagine where it is, the image gen cannot). Another tell is a noisy mess where you expect noise that is organized: cross-hatching trips it up, especially in boundary cases where two hatches meet, where two trees or other organic-looking things meet, or in other lines that have a very specific way of resolving when they meet. Finally, look for real-life objects that are slightly out of proportion; these things are trained on drawn images, photos, and everything else, and thus cross those influences a lot more than a human artist might. The eyes on the Lego figures gave it away, though that one also exhibits the discontinuity across edges with the woman’s scarf.


  • Always has been. The laws are there to incentivize good behavior, but when the cost of complying is larger than the projected cost of not complying, companies will ignore the law and deal with the consequences. Us regular folk generally can’t afford not to comply (except for all the low-stakes laws you break on a day-to-day basis), but when you have money to burn and a lot is at stake, the decision becomes more complicated.
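    To make that concrete, here is a toy comply-vs-ignore comparison; every number below is invented purely for illustration:

    ```python
    # Toy decision sketch with made-up numbers: a company complies only when
    # compliance is cheaper than the *expected* cost of getting caught.
    comply_cost = 100_000_000   # hypothetical: e.g. rebuilding a product line
    fine = 20_000_000           # hypothetical statutory penalty
    p_penalized = 0.5           # hypothetical chance of actually being penalized

    expected_ignore_cost = p_penalized * fine  # 10,000,000
    print("comply" if comply_cost < expected_ignore_cost else "ignore")  # ignore
    ```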

    The tech part of that is that we don’t really even know if removing data from these sorts of models is possible in the first place. The only way to remove it is to throw away the old model and make a new one (aka retraining the model) without the offending data. This is similar to how you can’t get a person to forget something without some really drastic measures, and even then, how do you know they forgot it? That information may still inform their decisions; they might just not be aware of it, or they might feign ignorance. The only real way to be sure is to scrap the person. Given how insanely costly it can be to retrain a model, the laws start looking like “necessary operating costs” instead of absolute rules.
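    As a minimal sketch (the names here are hypothetical, not any real API), “removal” in practice means filtering the corpus and training from scratch, because there is no delete operation on learned weights:

    ```python
    # Hypothetical pipeline: the only guaranteed "unlearning" is a full retrain.
    def remove_data(corpus, offending_docs, train_model):
        cleaned = [doc for doc in corpus if doc not in offending_docs]
        return train_model(cleaned)  # discards the old weights entirely

    # Usage sketch: new_model = remove_data(all_docs, flagged_docs, train_llm)
    ```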


  • “The real AI, now renamed AGI, is still very far”

    The idea and name of AGI are not new, and “AI” has not been used to refer to AGI since perhaps the very earliest days of AI research, when no one knew how hard it actually was. I would argue that we are back in those times though, since despite learning so much over the years we have no idea how hard AGI is going to be. As of right now, the correct answer to “how far away is AGI?” can only be “I don’t know.”


  • Five years ago, the idea that the Turing test would be so effortlessly shattered was considered a complete impossibility. AI researchers knew that it was a bad test for AGI, but actually creating an AI agent that could pass it without tricks was surely still at least 10-20 years out. Now, my home computer can run a model that can talk like a human.
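    As a rough sketch of what that looks like (assuming the Hugging Face transformers library; the model name is just one small open example):

    ```python
    # Runs a ~1B-parameter chat model locally; small enough for a home machine.
    from transformers import pipeline

    chat = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
    reply = chat("Hello! How are you today?", max_new_tokens=50)
    print(reply[0]["generated_text"])
    ```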

    Being able to talk like a human used to be what the layperson would consider AI; now it’s not even AI, it’s just crunching numbers. And this has been happening throughout the entire history of the field. You aren’t going to change this person’s mind: this bullshit of discounting the advancements in AI has been here from the start, and it’s so ubiquitous that it has a name.

    https://en.wikipedia.org/wiki/AI_effect


  • A given programming language often has limitations that are largely different from those of other languages. This means that different languages are often used on different kinds of problems. Want something fast? Use C. Want to write something quickly? Use Python. Want it to run on just about anything? Use Java. And so on.

    So why don’t we make one ultimate language, or a few that fulfill all needs? Well, partially because we haven’t figured out how to do that, but also because it’s really easy to learn yet another language once you understand how they work. I can write in Python, JS, C, C++, C#, Java, Kotlin, Rust, Perl, Ruby, PHP, Forth, Lisp, and I could keep going for quite a while. The underlying concepts are largely the same, so picking up a new language is no big deal (though getting good at it is a bigger deal). We have so many languages because ultimately it just doesn’t really matter that we have so many.
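    A toy illustration of how the concepts transfer (Python here, but it maps almost line-for-line onto JS, C#, Java, Kotlin, and most of the rest; only the syntax changes):

    ```python
    # A hash map plus a loop: the same building blocks exist in nearly
    # every mainstream language, just spelled differently.
    def count_words(text):
        counts = {}
        for word in text.lower().split():
            counts[word] = counts.get(word, 0) + 1
        return counts

    print(count_words("the quick the lazy the"))  # {'the': 3, 'quick': 1, 'lazy': 1}
    ```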


  • We don’t understand it because no one designed it. We designed how to train a NN, and we designed some parts of the structure, but not the individual parts inside. The largest LLMs have upwards of 70 billion parameters, each an individual number that training can tweak. There are just too many of them to understand what any individual one does, and since we just let an optimization algorithm do its optimizing, we can’t really even know what groups of them do.

    We can get around this: we can study it like we do the brain. Instead of looking at what an individual part does, group parts together and figure out how the group influences things (AI explainability), or even get a different NN to look at it and generate an explanation (post hoc rationale generation). But that’s not really the same as actually understanding what it is doing under the hood. What it is doing under the hood is more or less fundamentally unknowable; there is just too much information, and it’s not organized well enough for us to understand. Maybe one day we will be able to abstract what is going on in there and organize it in an understandable manner, but not yet.
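    A minimal sketch (assuming PyTorch) of how fast the parameter count outruns human inspection; even a toy network is already a pile of anonymous numbers:

    ```python
    import torch.nn as nn

    # A tiny two-layer network, nothing close to a real LLM.
    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

    total = sum(p.numel() for p in model.parameters())
    print(total)  # ~35,000 parameters here; the largest LLMs have ~70 billion
    ```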


  • One thing to note is that making an industry more efficient (like translation, which GPT is really good at: much better than Google Translate, but not necessarily better than existing tools) comes with a decrease in the number of jobs. Tech doesn’t have to eliminate the human portion, but if it makes even one human twice as efficient at their job, that’s half the humans you need doing that job for the same amount of work output.

    That being said, this is not a great infographic for this topic.