Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples: click to inspect

- "You really overestimate the AI, no lawyers for 2030? :D you forget that many job…" (ytc_UgzRk2UFt…)
- "ai image gen is not putting your art in a giant database and collaging them back…" (ytc_UgzZSkiuD…)
- "He has no idea about reality. Current AI requires that much energy that is not a…" (ytc_Ugw60J7yO…)
- "Don't know why, but got the idea that about 10% of the computing power will be f…" (ytc_UgzSdVfRn…)
- "AI isn’t going to just make a song itself unless you tell it to and you tell it …" (ytr_UgzQqsGku…)
- "Car stops by itself if someone gets Infront and you can't tell the waymo to do t…" (ytr_UgyVR0ZiI…)
- "Idk about you, but for most of us, art is all about human creativity and express…" (ytc_UgwFewy4z…)
- "I've been saying this so many times over and over and over since GPT 2 / 3 in 20…" (ytc_UgyFDR52J…)
Comment
Is Elon really concerned about AI or is he overstating the dangers to promote a regulatory system that can be easily controlled and guided in directions that are favorable to whoever pays the regulatory board enough money or gives enough favors, or presents a compelling enough argument about safety to sway opinions and control the technology? There are plenty examples of regulatory systems that have been corrupted by special interests, money, and corruption that it could be a fair question. Just a question.
youtube
AI Governance
2023-04-18T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id": "ytc_UgxE0SnQJh37qgGPVZt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz8b1tYlzUzY6QyiNJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugzk4HZzdQ1AnhtpUAR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw0000LVimaunbGI614AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwZ5jJhQDxDuBTZl7h4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwXAuQNgecSq81vcKN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwRq-e2vBuywid42794AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzSt4iAcovd2crFBYx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxIRgNTtGDMP7Pd_j14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyz3-KjrqJDRTWqaRh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]
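The lookup-by-comment-ID described at the top of this page can be sketched in a few lines: parse the raw LLM response as a JSON array and index it by comment ID. This is a minimal illustration, not the app's actual code; the variable names are invented, and the two sample rows are copied from the response above (a real response carries one row per coded comment).

```python
import json

# Raw LLM response: a JSON array of coded comments. Two rows are copied
# from the response shown above as a stand-in for the full payload.
raw_response = """[
  {"id": "ytc_UgzSt4iAcovd2crFBYx4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxE0SnQJh37qgGPVZt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

# Index the rows by comment ID so any coded comment can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_UgzSt4iAcovd2crFBYx4AaABAg"]
print(row["responsibility"], row["emotion"])  # company mixed
```

In practice the model output is not guaranteed to be valid JSON, so wrapping `json.loads` in a `try`/`except json.JSONDecodeError` and keeping the raw string around for inspection (as this page does) is the safer pattern.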