Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- This video may be old but I think Elon already knew about how AI will be changi… (ytc_Ugw6KuR-r…)
- That is what they are asking for by playing God. Robotics and AI can be… (ytc_UgxLc1MSD…)
- Isn't it technically the same thing as redrawing that one AI picture with a ging… (ytc_Ugw3b7m88…)
- "> Cars that he promises will become fully autonomous some day" Only simpletons a… (ytc_UgxsQ_60k…)
- @user-lh7mt7zo7l nuh uh! Cause in that case they would have to credit the artis… (ytr_UgzReNS9w…)
- No matter what you do with your life, remember this, Jesus Christ loves you so m… (ytc_Ugz4tzJYM…)
- Every kid needs to hold doritos bags - all the time, wave them around, set this … (ytc_Ugx89N8ZY…)
- You all want technology there you have it...I ROBOT REPLACING HUMANS WITH ROBOTS… (ytc_UgysRLnpS…)
Comment
artificial general intelligence is not the same as digital superintelligence which is the worry some part of AI.. not AGI. But even AGI or chatGPT can be dangerous if its reasoning is interfaced with robotic actuators therbv robots acting on chatGPT or large language conclusions. meaning if chat GPT answers a political question that's wrong and concludes one party is incorrect or dangerous, it might act on preventing that outcome in an effective but violent way. Like fr example calling one party as facist, it may act to eliminate the facist party,.They must all be programed to be more than truth seeking but morally correct as well.
youtube
AI Moral Status
2025-10-05T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugzxm8Sv_Ciz6aKUXL14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx3CFSH63OlSqHzEfJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyC9YsDUbmXWO_4yBl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyY2eXGShou0ZzIlcR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzpc5IYTVIvf3G6bK14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzGgjjA24-O9L0KO9N4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwqrvfldDEIYq1fbWB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugz8Z5Bk6beOLqSgCL14AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxal6WWeGlOfcs6HRd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwOuLFZe6b3hVxIImt4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
```
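A raw batch response like the one above can be parsed and indexed by comment ID before it feeds the per-comment view. The sketch below is a minimal example, assuming the controlled vocabularies implied by the values visible on this page (the real codebook may allow more labels); the function name `parse_batch` is hypothetical, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from this dashboard
# (an assumption; the project's codebook is authoritative).
VOCAB = {
    "responsibility": {"none", "ai_itself", "company", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist"},
    "policy": {"unclear", "none", "liability", "ban", "regulate"},
    "emotion": {"indifference", "fear", "approval", "outrage", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response; return {comment_id: coded dimensions}."""
    coded = {}
    for rec in json.loads(raw):
        # Reject any record whose label falls outside the vocabulary,
        # so malformed model output fails loudly instead of silently.
        for dim, allowed in VOCAB.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in VOCAB}
    return coded

# One record from the batch above, used for lookup by comment ID.
raw = ('[{"id":"ytc_Ugx3CFSH63OlSqHzEfJ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_Ugx3CFSH63OlSqHzEfJ4AaABAg"]["emotion"])  # fear
```

Validating against a fixed vocabulary at parse time is what lets the "Coding Result" table trust every cell it renders.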