  • A year, a year and a half, sometimes two. It just depends on the pieces involved and how long it takes to get them all to line up. Family/friends are invited on some vacations; sometimes it's just us, and we have to book as far out as possible due to demand. We rarely do anything spontaneous. My wife is a major planner.

    Currently we have:

    • July 4th weekend at a campground with church friends
    • September 2024: in-state mountain cabin for my wife's birthday
    • March 2025 (in planning): an Alaskan vacation to visit family and see the Iditarod
    • September 2025 (in planning): a family beach vacation

  • I live in an at-will employment state and have been a manager for quite some time. I've never seen an employee actually terminated for a protected status (race, religion, etc.). It's always been because they had poor performance and/or attendance issues and didn't want to get better. If you aren't at least a solid average, then it's develop up or out. This isn't my POV; this is the reality of the performance conversations I've been involved in. Personal accountability is a major problem these days, and if you have none, you won't have a job for long. The good news is that if you're solid in those areas, you will be valuable to your employer. This is why so many military applicants get picked up: they have a track record of attendance and completing the mission.

    Having said that, I'm sure you're correct that discrimination does happen and that employers lie about the real reason. I just think it doesn't happen quite as often as believed. Many poor performers I've known have outright lied about why they were actually terminated.


  • They absolutely do not learn and we absolutely do know how they work. It’s pretty simple.

    Generative AI needs massive training sets that represent the kinds of things it's asked to produce. Through the process of training, the AI learns the patterns in the data and can generate new data that fits within those patterns. It's statistics all the way down. In the case of a Large Language Model (LLM), it's always asking itself, "Given the words so far, what's the most likely next word, and does that word make sense in the context of the rest of the sentence?" LLMs don't understand a text as a text, that is, as a sequence of ideas unfolding logically, but rather as a set of tokens that carry statistical weights.
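
    To make the "statistics all the way down" point concrete, here's a minimal toy sketch in Python. The bigram table and every word in it are made up for illustration; a real LLM conditions on the entire preceding context using learned neural weights over subword tokens, not a lookup table, but the generation loop has the same shape: score the candidates for the next token, pick one, repeat.

    ```python
    # Toy sketch of next-token prediction. The hand-built bigram table
    # stands in for billions of learned weights (illustrative only).
    import random

    # Hypothetical conditional probabilities: P(next word | current word).
    BIGRAMS = {
        "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 0.6, "sat": 0.4},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
    }

    def next_token(current: str) -> str | None:
        """Sample the next word from the conditional distribution."""
        dist = BIGRAMS.get(current)
        if dist is None:
            return None  # no learned continuation for this word
        words = list(dist)
        weights = [dist[w] for w in words]
        return random.choices(words, weights=weights, k=1)[0]

    def generate(start: str, max_len: int = 10) -> list[str]:
        """Repeatedly ask: what word is most likely to come next?"""
        out = [start]
        while len(out) < max_len:
            nxt = next_token(out[-1])
            if nxt is None:
                break
            out.append(nxt)
        return out

    print(" ".join(generate("the")))  # e.g. "the cat sat down"
    ```

    Nothing in that loop "understands" the sentence it produces; it only follows the statistical weights, which is the point being made above.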

    https://jasonheppler.org/2024/05/23/i-made-this/