Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Both misalignment threats can be controlled already, but it wont happen. 1) Make narrow AI. Meaning AI should be designed for specific tasks/jobs. AI designed to cure cancer does not need to be able to drive cars. 2) What humans should be able to control the AI? All humans should be able to control AI. It should be possible for every human to turn off or pause AI. This would therefore ensure AI could not be used for war or malfeasance. If every human could command an AI to "stop" we would have to learn to agree before moving forward.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Jobs |
| Posted | 2025-12-09T10:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyYprC1ziMrkn4hrIZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyPSz4AxSRdfFUSenZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwpA3_KykX96P6OEaR4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzJu7vfFoX-t75cOPp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzFnWysNlsKWqZA2w94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzKOIGKs-lM7jYYoBl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx-spP_36iPC97PvcV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyanLyfio9SUg6jEoB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyJz2QcF0Wv02b1-YJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxowrJ1LFHAOFdQnph4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
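Before coded records like the batch above are merged back into a dataset, it is worth validating them against the coding scheme, since LLM output can drift off-schema. The sketch below is a minimal, hypothetical validator: the four dimension names and the values in each set are taken from the table and JSON above, but the sets are inferred from this one sample and may be incomplete for the full codebook.

```python
import json

# Allowed values per coding dimension. Inferred from the sample output
# above; the real codebook may include additional values.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "mixed", "resignation",
                "indifference", "approval"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that carry a
    comment id and an on-schema value for every coding dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # cannot join back to the source comment
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

Records that fail validation can then be queued for a retry prompt or manual coding rather than silently contaminating the analysis.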