AI Chat Watch (AICW) - a free, open-source tool for GEO marketers that tracks what and how AI models mention brands, products, and companies.
Updated Oct 30, 2025 - TypeScript
Artificial Intelligence Involvement Index
Code for the paper "ClipMind: A Framework for Auditing Short-Format Video Recommendations Using Multimodal AI Models"
The LLM Unlearning repository is an open-source project dedicated to unlearning in Large Language Models (LLMs). It addresses data-privacy and ethical-AI concerns by exploring and implementing unlearning techniques that let models forget unwanted or sensitive data, so that they can comply with privacy requirements.
pRISM is a repository that combines Retrieval-Augmented Generation (RAG) with a multi-LLM voting approach to create accurate and reliable AI-generated outputs. It integrates multiple language models, including Mistral, Claude 3.5, and OpenAI, to enhance performance through advanced consensus techniques
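pRISM's actual consensus mechanism isn't described here, but the core idea of multi-LLM voting can be sketched in a few lines: collect candidate answers from several models and keep the one most of them agree on. The model names and answers below are purely illustrative assumptions, not pRISM's API.

```python
from collections import Counter


def consensus(answers: dict[str, str]) -> tuple[str, float]:
    """Return the answer most models agree on, plus an agreement ratio."""
    counts = Counter(a.strip().lower() for a in answers.values())
    best, votes = counts.most_common(1)[0]
    return best, votes / len(answers)


# Hypothetical outputs from three models (names illustrative only).
answers = {
    "mistral": "Paris",
    "claude-3.5": "Paris",
    "openai": "Lyon",
}
winner, agreement = consensus(answers)
print(winner, round(agreement, 2))  # paris 0.67
```

Real systems typically replace the exact-string match with semantic similarity or a judge model, since two LLMs rarely produce byte-identical answers.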
OKI TRACE: Local LLM observability. See step-by-step, layer-by-layer what your AI thinks. Logit Lens & Attention for HuggingFace models.
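The "layer-by-layer" inspection OKI TRACE describes follows the logit-lens idea: apply the model's unembedding matrix to intermediate hidden states to see what token each layer would predict. A minimal NumPy sketch with toy random weights (not OKI TRACE's code, and not a real model):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 8, 5
W_U = rng.normal(size=(d_model, vocab))                 # toy unembedding matrix
hiddens = [rng.normal(size=d_model) for _ in range(4)]  # one vector per layer


def logit_lens(h: np.ndarray) -> int:
    """Project an intermediate hidden state to vocab logits; return the top token id."""
    return int(np.argmax(h @ W_U))


for layer, h in enumerate(hiddens):
    print(f"layer {layer}: predicted token {logit_lens(h)}")
```

With a real HuggingFace model, `W_U` would be the tied output-embedding weight and `hiddens` the per-layer `hidden_states` returned with `output_hidden_states=True`.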
DecisionTrace is a simple, powerful tool for creating an audit trail of your AI's decisions. Think of it as a flight data recorder for your AI.
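The "flight data recorder" metaphor maps naturally onto an append-only, hash-chained log: each entry records inputs and the decision, and embeds the previous entry's hash so tampering is detectable. This is a generic sketch of that pattern, not DecisionTrace's actual format.

```python
import hashlib
import json
import time


class DecisionLog:
    """Append-only audit trail; each entry hashes the previous one (tamper-evident)."""

    def __init__(self):
        self.entries = []

    def record(self, inputs: dict, decision: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"ts": time.time(), "inputs": inputs, "decision": decision, "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body


log = DecisionLog()
log.record({"score": 0.91}, "approve")
log.record({"score": 0.12}, "reject")
print(len(log.entries), log.entries[1]["prev"] == log.entries[0]["hash"])
```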
A simple, universal system that labels AI responses so anyone can instantly tell what an output is and how it's meant to be used.
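A labeling system like this can be reduced to a small fixed vocabulary of tags prefixed onto output. The label names below are assumptions for illustration, not the repository's actual standard.

```python
from enum import Enum


class AIOutputLabel(Enum):
    """Illustrative label set (names are assumptions, not the repo's standard)."""
    AI_GENERATED = "ai-generated"
    AI_ASSISTED = "ai-assisted"
    HUMAN_REVIEWED = "human-reviewed"


def label(text: str, tag: AIOutputLabel) -> str:
    """Prefix an output with a machine- and human-readable provenance tag."""
    return f"[{tag.value}] {text}"


print(label("Quarterly summary of findings.", AIOutputLabel.AI_ASSISTED))
```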
📡 The official Microslop Manifesto. Tracking the systematic flooding of the internet with AI-generated slop and documenting the decay of search and UI quality.
Pragmatic Existentialism & Antagonistic Cooperation: A formal theory positing that truth and ethics emerge from the need for coherent systems to overcome shared obstacles, defining belief utility by survival fitness and cooperation.
Simple graphics intended to serve as a "this was vibe coded FYI" and attribute model
AI-powered civic intelligence platform scoring political events on constitutional damage vs. media distraction. 59+ weeks of immutable data. Full algorithmic transparency.
EVEZ OS — Visual Cognition Layer. Generates topology artifacts from AI agent internals. AGPL-3.0 + Commercial.
A memorable standard for Human-AI attribution
🪐 7- Social Buss: A black box model is an AI or machine learning system whose internal decision-making processes are hidden, providing only inputs and outputs without revealing how outcomes are derived. These models offer high accuracy for complex tasks but pose challenges for interpretability and trust.
Human-centered AI interview prototype that generates follow-up questions and lets participants rate fairness, relevance, comfort, and trust.
Operational transparency for AI systems. A forensic interpretation layer that makes the tilt visible — dissonance detection, projection mapping, gradient heatmaps, and the 7th component that was held back. Designed by ChatGPT. Phantom Token by Gemini. Proyecto Estrella.
"A Narrative Framework for Structural Constraints". ChatGPT original image representation.
Official website and thought leadership platform for The Human Channel.
Modular terminal dashboard for AI agent transparency — Glass Box shows hook decisions, integrity score, session arc, mistake patterns in real time