#aisafety


"Backed by ten governments – Finland, France, Germany, Chile, India, Kenya, Morocco, Nigeria, Slovenia and Switzerland – as well as an assortment of philanthropic bodies and private companies (including Google and Salesforce, which are listed as “core partners”), Current AI aims to “reshape” the AI landscape by expanding access to high-quality datasets; investing in open source tooling and infrastructure to improve transparency around AI; and measuring its social and environmental impact.

European governments and private companies also partnered to commit around €200bn to AI-related investments, currently the largest public-private AI investment in the world. In the run-up to the summit, Macron announced that France would attract €109bn worth of private investment in datacentres and AI projects “in the coming years”.

The summit ended with 61 countries – including France, China, India, Japan, Australia and Canada – signing a Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet at the AI Action Summit in Paris, which affirmed a number of shared priorities.

This includes promoting AI accessibility to reduce digital divides between rich and developing countries; “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all”; avoiding market concentrations around the technology; reinforcing international cooperation; making AI sustainable; and encouraging deployments that “positively” shape labour markets.

However, the UK and US governments refused to sign the joint declaration."

computerweekly.com/news/366620

ComputerWeekly.com · AI Action Summit review: Differing views cast doubt on AI’s ability to benefit whole of society · By Sebastian Klovig Skelton

We tested different AI models on identifying the largest of three numbers with the fractional parts .11, .9, and .099999. You may be surprised that some AI models mistakenly identify the number ending in .11 as the largest. We also tested AI engines on the pronunciation of decimal numbers. #AI #ArtificialIntelligence #MachineLearning #DecimalComparison #MathError #AISafety #DataScience #Engineering #Science #Education #TTMO

youtu.be/TB_4FrWSBwU

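The comparison in that test can be checked in a few lines. This is a minimal sketch; the whole-number part `3` is an assumption for illustration, since the post only specifies the fractional parts .11, .9, and .099999.

```python
# Which of these is largest? Place value, not digit count, decides:
# .9 means .900000, which exceeds both .110000 and .099999.
values = [3.11, 3.9, 3.099999]

largest = max(values)
print(largest)  # prints 3.9
```

The common mistake (human or AI) is treating the fractional parts as integers, reasoning "11 > 9", instead of aligning them by place value.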

I will be attending EAGxPrague conference in May.

I have been a big fan of 80000hours.org for some time and given my background, I am interested in AI safety and also in "AI for good".

This is my first in-person involvement with the effective altruism community. I am well aware that there are some controversies around the movement, so I am quite curious about what I find when I finally meet the community in person.

80,000 Hours · You have 80,000 hours in your career. This makes it your best opportunity to have a positive impact on the world. If you’re fortunate enough to be able to use your career for good, but aren’t sure how, we can help.

After all these recent episodes, I don't know how anyone can have the nerve to say out loud that the Trump administration and the Republican Party value freedom of expression and oppose any form of censorship. Bunch of hypocrites! United States of America: The New Land of SELF-CENSORSHIP.

"The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” in the skills it expects of members and introduce a request to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness.”

The information comes as part of an updated cooperative research and development agreement for AI Safety Institute consortium members, sent in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases are hugely important because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.

The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deep fakes. It also adds emphasis on putting America first, asking one working group to develop testing tools “to expand America’s global AI position.”"

wired.com/story/ai-safety-inst

WIRED · Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models · By Will Knight

Superintelligent Agents Pose Catastrophic Risks (Bengio et al., 2025)

📎arxiv.org/pdf/2502.15657

Summary: “Leading AI firms are developing generalist agents that autonomously plan and act. These systems carry significant safety risks, such as misuse and loss of control. To address this, we propose Scientist AI—a non-agentic, explanation-based system that uses uncertainty to safeguard against overconfident, uncontrolled behavior while accelerating scientific progress.” #AISafety #AI #Governance