• 0 Posts
  • 45 Comments
Joined 7 months ago
Cake day: November 21st, 2023



  • Ekky@sopuli.xyz to Lemmy Shitpost@lemmy.world · Which is which? · 1 month ago

    Hmm, well, I have heard women being compared to singing birds (or, more degradingly, to vultures or a pen of hens when in a group), but I’ve more often heard women being romantically compared to bees or flowers. I don’t think I’ve ever heard men compared to bees, though, but often to birds (eagles, vultures, seagulls, etc.).

    It might also be local culture, as I usually think of harmony, nature, and perhaps matriarchy when pondering bees, while birds seem much more gender neutral: standoffish, elegant, brutal, impulsive, egoistic, even presented as predatory and evil in children’s movies and some other media.

    So, using common stereotyping, you can see where I’m coming from.


  • Ekky@sopuli.xyz to Lemmy Shitpost@lemmy.world · Which is which? · 1 month ago

    Thank you for the explanation.

    As someone not too familiar with American cultures, I’d probably make an assumption and go for the (to me) more masculine bird over the docile, flower-loving bee, since bees have stingers they would normally never use, while birds have beaks/peckers.




  • To expand a little:

    While it is indeed annoying, it mostly went as expected: lawmakers must always be ready for companies to respond to new and more restrictive laws with malicious compliance.

    The vast majority of websites either don’t actually follow the rules for cookie banners or implement them in as roundabout a way as possible, making them needlessly annoying - it should always be easier, and at least as fast, to decline as to accept.

    While this all sounds like cookie banners are ultimately a failure because of the bad-faith implementations companies provided in response, they do function as an eye-opener for the common man and as a stepping stone for the EU towards further laws and fines regarding citizens’ right to privacy.




  • People who rely too heavily on autocorrect already cause misunderstandings by writing things they did not intend to.

    I had a friend during uni who was dyslexic, and while the individual words in his messages were spelled properly, you still had to guess the meaning from the randomly thrown-together words he presented you with.

    Now that we can not only correct a single word or roughly fix the structure of a sentence, but fabricate whole paragraphs and articles from a single prompt sentence, I imagine we will see a stark increase in low-quality content, accidental misinformation, and easily preventable misunderstandings - even more than we already have.




  • Agreed for induction, but I’d much rather spend one or two extra minutes cleaning the knobs than have to almost cook my finger on the 60-90 degree Celsius hot touch surface of this conventional stove just to change the plate from step 7 to 4 for 10 FUKKEN SECONDS! OUCH!

    Having to restart it 2-3 times during cooking because it got confused (pan moved slightly to the side) is also rather annoying.

    Edit & tl;dr: Touch controls work decently on induction, just please keep them far away from any conventional stoves.




  • Same, I’ve got an Opel Corsa from 2016, so it’s pretty much brand new.

    The only things in the wheel are the speed control, wipers, and default lights.

    For everything else required for driving, such as the fog lights, emergency lights, front and rear window heating, AC, radio, and of course the gear stick, I’ll need to take a hand off the wheel.

    Luckily for me, the touchscreen in the middle only handles less important things like navigation and external music sources.



  • Neural nets are a technology that falls under the umbrella term “machine learning”. Deep learning is also part of machine learning, just more specialized towards large NN models.

    You can absolutely train NNs on your own machine - after all, that’s what I did for my master’s before ChatGPT and all that, defining the layers myself, and it’s also what I do right now with CNNs. That said, LLMs do tend to become so large that anyone without a supercomputer can at most fine-tune them.
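
    To give a feeling for what “defining the layers yourself” can look like, here’s a minimal toy sketch in numpy - the layer sizes, activation, and data are made up purely for illustration and aren’t from any of my actual projects:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two hand-defined layers: 4 inputs -> 8 hidden nodes -> 1 output.
    W1, b1 = rng.normal(scale=0.1, size=(4, 8)), np.zeros(8)
    W2, b2 = rng.normal(scale=0.1, size=(8, 1)), np.zeros(1)

    def forward(x):
        h = np.maximum(0, x @ W1 + b1)  # hidden layer with ReLU activation
        return h @ W2 + b2              # output layer

    x = rng.normal(size=(3, 4))          # a batch of 3 samples, 4 features each
    print(forward(x).shape)              # -> (3, 1)
    ```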

    “Decision tree stuff” would be regular AI, which can be turned into ML by adding a “learning method” like a KNN, a neural net, a genetic algorithm, etc. - which isn’t much more than a more complex decision tree whose decision thresholds (weights) were automatically estimated by analyzing a dataset. More complex learning methods are even capable of fine-tuning themselves during operation (LLMs, KNN, etc.), as you stated.

    One big difference between NN-based methods and other learning methods is that NNs like to add non-weighted layers which, instead of making decisions, transform the data to allow for a more diverse decision process.

    EDIT: Some corrections, now that I’m fully awake.

    While very similar in structure and function, an NN is indeed not a decision tree. It functions much the same as one, as is a basic requirement for most types of AI, but whereas every node in a decision tree has unique branches with their own unique child nodes, all of an NN’s nodes are connected to all nodes of the following layer. This is also one of the strong points of an NN: something that seemed outrageous to it a moment ago might become much more plausible when looked at from a different point of view, such as after a transformative layer.
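
    As a rough illustration of that structural difference (toy numbers, not from any real model): a decision tree routes a sample down exactly one path of unique branches, while a fully connected NN layer wires every node to every node of the next layer:

    ```python
    import numpy as np

    # Decision tree: each node has its own unique branches, so a sample
    # follows exactly one root-to-leaf path.
    def tree(x):
        if x[0] > 0.0:
            return "leaf A" if x[1] > 0.0 else "leaf B"
        return "leaf C" if x[2] > 0.0 else "leaf D"

    # Fully connected layer: one weight per (input node, output node) pair,
    # so every output node mixes information from every input node at once.
    W = np.full((3, 4), 0.5)              # 3 nodes feeding 4 nodes
    x = np.array([0.2, -1.0, 0.7])
    print(tree(x), x @ W)
    ```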

    Also, other learning methods usually don’t have layers, or, if one were to define a “layer” as a one-shot decision process, they pretty much only have one or two. In contrast, an NN can theoretically have an infinite number of layers, allowing for pretty much infinite complexity as long as the input data is not abstracted beyond reason.

    Lastly, NNs don’t back-propagate by default, though they make it easy to enable such features given enough processing power and, optionally, enough bandwidth (in the case of ChatGPT). LLMs are a little different, as I’m decently sure they implement back-propagation as part of the technology’s definition, just like KNN.
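
    For the passersby: the gradient-based learning that back-propagation enables boils down to nudging each weight against the gradient of the error. A toy example with a single made-up weight (no layers involved, so no chain rule needed here):

    ```python
    # Fit a single weight w so that w * x matches a target value.
    w, x, target, lr = 0.5, 2.0, 3.0, 0.1
    for _ in range(50):
        pred = w * x
        grad = 2 * (pred - target) * x   # derivative of the squared error w.r.t. w
        w -= lr * grad                   # nudge the weight against the gradient
    print(round(w, 3))                   # converges towards target / x = 1.5
    ```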

    This became a little longer than I had hoped; it’s just a fascinating topic. I hope you don’t mind that I went into more detail than necessary - it was mostly for the random passersby.


  • AI is a very broad term, ranging from physical AI (the material and properties of a robotic grabbing tool), over classic AI (as seen in many games, or in a robotic arm calculating a path from its current position to a target position), to MLAI (LLMs, neural nets in general, KNN, etc.).

    I guess it’s much the same as asking “are vehicles bad?”. I don’t know, are we talking horse carriages? Cars? Planes? Electric scooters? Skateboards?

    Going back to your question, AI in general is not bad, though LLMs have become too popular too quickly and have thus ended up being misunderstood and misused. So you can indeed say that LLMs are bad, at least when not used for their intended purposes.