#interpretability

Nicole Hennig
The Interpretable AI playbook: What Anthropic’s research means for your enterprise LLM strategy
https://venturebeat.com/ai/the-interpretable-ai-playbook-what-anthropics-research-means-for-your-enterprise-llm-strategy/
#AI #interpretability
JMLR
'Causal Abstraction: A Theoretical Foundation for Mechanistic Interpretability', by Atticus Geiger et al.
http://jmlr.org/papers/v26/23-0058.html
#abstraction #interpretability #ai
Kevin Dominik Korte
Not understanding their models isn't news for AI companies. It's a fundamental part of the underlying technology's architecture. Pretending that we are just a step away from interpretability is simply disingenuous.
#AI #interpretability
https://www.axios.com/2025/06/09/ai-llm-hallucination-reason
Nicole Hennig
AI interpretability is further along than I thought, by Sean Goedecke
https://www.seangoedecke.com/ai-interpretability/
#AI #interpretability
ESWC Conferences
🧪 The Knowledge Graphs for Responsible AI Workshop is now underway at #ESWC2025!
📍 Room 7 – Nautilus Floor 0

The workshop explores how Knowledge Graphs (KGs) can promote the principles of Responsible AI, such as fairness, transparency, accountability, and inclusivity, by enhancing the interpretability, trustworthiness, and ethical grounding of AI systems. 📊🤖

#KnowledgeGraphs #ESWC2025 #ResponsibleAI #fairness #trustworthiness #Interpretability
Hacker News
Beyond the Black Box: Interpretability of LLMs in Finance
https://arxiv.org/abs/2505.24650
#HackerNews #Interpretability #LLMs #Finance #AI #Research #BlackBox
Steven Carneiro
Circuit tracing for AI interpretability: 🤖
https://www.anthropic.com/research/open-source-circuit-tracing
#ai #llm #interpretability #research #innovation
UKP Lab
Are LMs more than their behavior? 🤔

Join our Conference on Language Modeling (COLM) workshop and explore the interplay between what LMs answer and what happens internally ✨

See you in Montréal 🍁

CfP: shorturl.at/sBomu
Page: shorturl.at/FT3fX
Reviewer Nomination: shorturl.at/Jg1BP

#nlproc #interpretability
Biruk
Unlock the secrets of AI learning! Ever wondered how generative AI, the powerhouse behind stunning images and sophisticated text, truly learns? Park et al.'s study, 'Emergence of Hidden Capabilities: Exploring Learning Dynamics in Concept Space,' offers a new perspective. Forget black boxes: this research unveils a "concept space" in which AI learning becomes a visible journey. By casting concepts into a geometric space, the authors show how models learn step by step, laying bare the order and timing of what they acquire. The "concept signal" predicts which concepts a model learns first, and "trajectory turns" reveal the sudden moments at which emergent abilities appear.

This is not just a theoretical abstraction; the framework has practical implications:
- Supercharge AI training: optimise training data to speed learning and improve efficiency.
- Demystify new behaviours: understand and even manage unforeseen capabilities of state-of-the-art AI.
- Debug at scale: gain insight into a model's knowledge state to identify and fix faults.
- Future-proof AI: the framework is not tied to one modality, so it can inform the study of learning in other AI systems.

This study is worth reading for anyone who cares about the future of AI, from scientists and engineers to tech enthusiasts and business executives. It's not only about what AI can accomplish, but how it comes to do so. Click through to read the full article and explore the concept space.

#AI #MachineLearning #GenerativeAI #DeepLearning #Research #Innovation #ConceptSpace #EmergentCapabilities #AIDevelopment #Tech #ArtificialIntelligence #DataScience #FutureofAI #Interpretability
Nicole Hennig
Dario Amodei — The Urgency of Interpretability
https://www.darioamodei.com/post/the-urgency-of-interpretability
#AI #Anthropic #interpretability
LLMs
Decoding the Mind of Machines: Recent Advancements in Large Language Model Interpretability Contin...
https://medium.com/@datumdigest/decoding-the-mind-of-machines-9cf8f16ae2b9?source=rss------machine_learning-5
#large-language-models #machine-learning #interpretability #llm #artificial-intelligence
Event Attributes: https://awakari.com/pub-msg.html?id=LSWGk5fexEkPC6i3rfvQR6T4kEq
Semantic-Search
Do text embeddings perfectly encode text? 'Vec2text' can serve as a solution for accurate...
https://thegradient.pub/text-embedding-inversion/
#Interpretability #LLM #NLP
Event Attributes: https://awakari.com/pub-msg.html?id=71WoOtQi6KpLVPiChoFTXZtakyG
Reverse-Engineering
Circuit Tracing: A Step Closer to Understanding Large Language Models
Reverse-engineering large ...
https://towardsdatascience.com/circuit-tracing-a-step-closer-to-understanding-large-language-models/
#Machine #Learning #AI #Interpretability #Llm #Neural #Network #Transformer
Event Attributes: https://awakari.com/pub-msg.html?id=46LTLgCiXYBE1ayxAP9fneZlG1A
LLMs
How Does Claude Think? Anthropic’s Quest to Unlock AI’s Black Box
Large language models (LLMs...
https://www.unite.ai/how-does-claude-think-anthropics-quest-to-unlock-ais-black-box/
#Artificial #Intelligence #AI #interpretability #reasoning #Anthropic #Explaining #LLMs #Claude #3.5
Event Attributes: https://awakari.com/pub-msg.html?id=4cpLbEQ6Jwx7HJYT9CTJarhspcW
LLMs
Tracing the thoughts of a large language model
In ...
https://simonwillison.net/2025/Mar/27/tracing-the-thoughts-of-a-large-language-model/#atom-everything
#anthropic #claude #pdf #generative-ai #ai #llms #interpretability
Event Attributes: https://awakari.com/pub-msg.html?id=7uBtw8TtQG13FdtHnJNRF3az5yi
deepseek
The Hidden Risks of DeepSeek R1: How Large Language Models Are Evolving to Reason Beyond Human Understanding
https://www.unite.ai/the-hidden-risks-of-deepseek-r1-how-large-language-models-are-evolving-to-reason-beyond-human-understanding/
#Artificial #Intelligence #ai #ethics #AI #explainability #interpretability #reasoning
Event Attributes: https://awakari.com/pub-msg.html?id=5BnagyUppiXCUs5MsiNw078KCie