• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: July 4th, 2023

  • Interesting timing. The EU has just passed the Artificial Intelligence Act, setting a global precedent for the regulation of AI technologies.

    A quick rundown of what it entails and why it might matter in the US:

    What is it?

    • The EU AI Act is a comprehensive set of rules aimed at ensuring AI systems are developed and used ethically, with respect for human rights and safety.
    • The Act targets high-risk AI applications, including those in employment, healthcare, and policing, requiring strict compliance with transparency, data governance, and non-discrimination.

    Key Takeaways:

    • Prohibited Practices: Certain uses of AI, like behavioral manipulation or unfair surveillance, are outright banned.
    • High-Risk Regulation: AI systems with significant implications for people’s rights must undergo rigorous assessments.
    • Transparency and Accountability: AI providers must be transparent about how their systems work, particularly when processing personal data.

    Why Does This Matter in the US?

    • Brussels Effect: Similar to how GDPR set a new global standard for data protection, the EU AI Act could influence international norms and practices around AI, pushing companies worldwide to adopt higher standards.
    • Cross-Border Impact: Many US companies operate in the EU and will need to comply with these regulations, which might lead them to apply the same standards globally.
    • Potential for US Legislation: The EU’s move could catalyze similar regulatory efforts in the US, promoting a broader discussion on the ethical use of AI technologies.

    Emotion-tracking AI is covered:

    Banned applications: The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.


  • Here’s a summary:

    Unbaited title: “Ukraine Gains Upper Hand in Electronic Warfare Against Russia”

    In the ongoing conflict, Ukraine has effectively countered Russia’s electronic warfare (EW) capabilities. Initially at a disadvantage, Ukrainian forces have developed their own EW strategies to disrupt Russian electronic operations, decisively affecting the course of battles. The article highlights the significance of EW in modern warfare and underscores the urgency for the US military to revitalize its EW capabilities, drawing lessons from the Ukrainian experience.

    Summarised with ChatGPT.


  • You raise a fair point. Hiding downvotes could help avoid bandwagon negativity, as users may be less inclined to pile on additional downvotes. However, I still believe transparency should take priority over these concerns.

    Showing the full picture - both upvotes and downvotes separately - allows users to more accurately judge content quality and community sentiment. I think we should trust users to make reasoned judgments, rather than hide data from them. A slight potential increase in negativity seems a small price to pay for maintaining transparency and accountability on Lemmy.

    Edit: I do agree that displaying a relative percentage, as you mention, could serve as a compromise.
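
    To make that compromise concrete, here is a minimal sketch in TypeScript of the three display options discussed in this thread. The helper names and the up/down tally shape are hypothetical, not Lemmy’s actual UI code:

    ```typescript
    // Hypothetical vote-display helpers; not Lemmy's actual UI code.

    interface VoteCounts {
      upvotes: number;
      downvotes: number;
    }

    // Full transparency: show both tallies separately.
    function separateCounts({ upvotes, downvotes }: VoteCounts): string {
      return `▲ ${upvotes} ▼ ${downvotes}`;
    }

    // Hidden downvotes: only the net score is visible.
    function netScore({ upvotes, downvotes }: VoteCounts): string {
      return `${upvotes - downvotes} points`;
    }

    // Compromise: a relative percentage signals overall sentiment
    // without exposing the raw size of a downvote pile-on.
    function upvotePercentage({ upvotes, downvotes }: VoteCounts): string {
      const total = upvotes + downvotes;
      if (total === 0) {
        return "no votes yet";
      }
      return `${Math.round((100 * upvotes) / total)}% upvoted`;
    }

    // Example: 90 up / 30 down.
    console.log(separateCounts({ upvotes: 90, downvotes: 30 }));   // "▲ 90 ▼ 30"
    console.log(netScore({ upvotes: 90, downvotes: 30 }));         // "60 points"
    console.log(upvotePercentage({ upvotes: 90, downvotes: 30 })); // "75% upvoted"
    ```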