• 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: June 9th, 2023


  • isildun@sh.itjust.works to Greentext@sh.itjust.works · Anon boots up a game · 2 months ago

    Actually, this isn’t the worst idea. It can be hard to tell what kind of input device the player is using, especially on PC. Are they on kb+m, an Xbox controller, a PSX controller, a generic bargain-bin controller, etc.? You also can’t just assume that because a controller is connected the player is going to use it (lots of games do assume exactly that… much to my dismay, since they make me go disconnect the controller). Once the player presses at least one button, you can tailor all the prompts and inputs to that device.
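
    For what it’s worth, here’s a rough sketch of the idea in Python using pygame, purely for illustration (the event names are pygame’s, not from any particular game): wait for the first input event and pick a prompt scheme based on where it came from.

    ```python
    import pygame

    # Toy sketch: decide which prompts to show based on the source of the
    # first input event. A real game would also keep watching for the player
    # switching devices mid-session, not just the very first press.
    KEYBOARD_MOUSE_EVENTS = {pygame.KEYDOWN, pygame.MOUSEBUTTONDOWN}
    GAMEPAD_EVENTS = {pygame.JOYBUTTONDOWN}

    def detect_input_scheme():
        pygame.init()
        pygame.joystick.init()
        # Joysticks have to be opened before they generate events.
        for i in range(pygame.joystick.get_count()):
            pygame.joystick.Joystick(i).init()
        pygame.display.set_mode((320, 240))  # a window is needed to receive key events

        while True:
            for event in pygame.event.get():
                if event.type in KEYBOARD_MOUSE_EVENTS:
                    return "kb+m"        # show keyboard/mouse prompts
                if event.type in GAMEPAD_EVENTS:
                    return "controller"  # show controller prompts
                if event.type == pygame.QUIT:
                    return None

    if __name__ == "__main__":
        print("First input came from:", detect_input_scheme())
    ```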


  • Long story short: by default, you’re being made to give up rights you should have, particularly around class-action lawsuits. It’s strictly bad for you and strictly good for the company. They probably shouldn’t be allowed to do this. Since they are, the only thing we can do to protest it is to opt out.

    Maybe you’ll never sue Discord. But maybe someday there will be a lawsuit brought against Discord by someone else. A few possible topics: a security vulnerability that leaks personal information, the use of Discord content as AI training data (e.g. copyright issues), or the safety of minors online. If you don’t opt out, you can’t be a part of such lawsuits if they ever become relevant. That weakens these lawsuits overall and empowers companies like Discord to do more shady things with less fear of repercussions.

    And because the vast majority of people will never opt out (you’re opted in by default), these kinds of lawsuits are weakened from the start. That’s why every company in the US is doing this forced-arbitration thing. At this point they’d be crazy not to, since it’s such a good deal for them and the average person doesn’t care enough about it.


  • isildun@sh.itjust.works to Technology@lemmy.world · How Quora Died · 10 months ago

    I’m almost starting to wonder if that’s the plan. Just keep saying “IPO IPO IPO” to get funding from over-eager VCs who want a piece of the IPO before it becomes widely available.

    But then you just never IPO. Keep making minor-to-moderate mistakes along the way so you can be all “weeeeell, we would have IPO’d, but [insert thing here], so we want to wait another 6 months to let it die down”. Repeat until you’re ready to quit, then actually IPO and ride the initial IPO high all the way down via golden parachute.





  • I’m not the person who found it originally, but I understand how they did it. We have three useful data points: you are 2.6 km from a Burger King in Italy, that BK is on a street called "Via ", and you are 9792 km from a Burger King in Malaysia.

    1. The upper BK in Malaysia is not censored, so we have its exact location.
    2. Find a place in Italy that is 9792 km away using the Measure Distance tool on something like Google Maps.
    3. Even though there are potentially multiple valid locations in Italy, we know you’re within 2.6 km of another BK. Florence is sensible because there are BKs near the 9792 km mark.
    4. Once we do that, we can find a spot that is both 9792 km from the Malaysian BK and 2.6 km from a nearby BK on a street called “Via”, effectively pinning down where the image was taken.


    It’s not perfect, but it works well! This is the same principle your GPS uses; strictly speaking it’s trilateration (working from distances rather than angles), though people usually call it triangulation. We only had distances to two points, and one of them doesn’t tell us the sub-kilometer distance. With distances to three points, we could find your EXACT location, within some error depending on how precise the distance information is.
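
    To make the geometry concrete, here’s a toy version of the distance-intersection step in Python. It flattens the Earth into a plane and uses made-up coordinates, so the numbers are illustrative only; the real solve happens on a sphere, but the “two candidate points, a third constraint picks one” idea is the same.

    ```python
    import math

    def circle_intersections(c1, r1, c2, r2):
        """Intersect two circles (center, radius) in the plane.

        With distances to only two known spots you generally get two candidate
        points; a third distance (or extra context, like "near a BK on a street
        called Via ...") picks the right one.
        """
        (x1, y1), (x2, y2) = c1, c2
        d = math.hypot(x2 - x1, y2 - y1)
        if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
            return []  # circles don't intersect (or are concentric)
        a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 to the chord midpoint
        h = math.sqrt(max(r1**2 - a**2, 0.0))  # half the chord length
        mx = x1 + a * (x2 - x1) / d
        my = y1 + a * (y2 - y1) / d
        ox = h * (y2 - y1) / d
        oy = h * (x2 - x1) / d
        return [(mx + ox, my - oy), (mx - ox, my + oy)]

    # Made-up example: "BK Malaysia" at the origin, "BK Italy" 9790 km away along
    # the x-axis, photo taken 9792 km from the first and 2.6 km from the second.
    print(circle_intersections((0.0, 0.0), 9792.0, (9790.0, 0.0), 2.6))
    ```

    A third distance circle would (up to measurement error) pass through only one of those two candidates, which is the “distances to three points” part above.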


  • Copilot, yes. You can find some reasonable alternatives out there, but I don’t know if I would use the word “great”.

    GPT-4… not really. Unless you’ve got serious technical knowledge, serious hardware, and lots of time to experiment, you’re not going to find anything even remotely close to GPT-4. Probably the best the “average” person can do is run quantized Llama-2 on an M1 (or better) MacBook, taking advantage of the unified memory. Lack of GPU VRAM makes running even the “basic” models a challenge. And, for the record, this will still perform substantially worse than GPT-4.
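
    If you want to try the quantized-Llama-2 route on a Mac, one common path (shown as an illustration, not a recommendation) is llama.cpp via its Python bindings with a GGUF-quantized model. The model filename below is a placeholder for whatever quantization you actually download.

    ```python
    # Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder: your downloaded GGUF file
        n_ctx=2048,        # context window; larger needs more memory
        n_gpu_layers=-1,   # offload all layers to Metal/GPU where supported
    )

    out = llm(
        "Q: Explain unified memory on Apple Silicon in one sentence. A:",
        max_tokens=64,
        stop=["Q:"],
    )
    print(out["choices"][0]["text"].strip())
    ```

    Even with full Metal offload, a quantized 7B model is nowhere near GPT-4, which is the point above.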

    If you’re willing to pony up, you can rent hardware from the usual cloud providers, but it will not be cheap, and it will still take serious effort, since you’re basically going to have to fine-tune your own LLM to get anywhere near the same ballpark as GPT-4.