# Raw LLM Responses

Inspect the exact model output for any coded comment, or look one up directly by its comment ID.
## Random samples — click to inspect

- “They’ll never hit that particular level of intelligence though. The second AI be…” (`ytc_Ugz-4ZNAs…`)
- “The ai was “hallucinating “ remember if you make a mistake in your job apparentl…” (`ytc_UgwOBoTHd…`)
- “Valid concerns that all humans with any vision should share. I agree with Elon o…” (`rdc_j512wg8`)
- “Well, If the AI had changed it to indian/south asian kids, this wouldn't be an i…” (`ytc_Ugxe7uzgO…`)
- “Automating the mundane is keyphrase. The mundane will keep on improving and go d…” (`ytc_Ugye-jF0R…`)
- “WinCo is building a fully automated dc center. So what your going to blacklist W…” (`ytr_UgxfGKc5h…`)
- “I prefer a round robot, unsexualized please. Like R2D2 but more physically funct…” (`ytc_Ugz0jJ5GM…`)
- “It is about time I am so sick of the AI generated dribble that's been put on her…” (`ytc_Ugw3qGA-O…`)
## Comment

> The only reason why AI is dangerous is because its loaded with information and is asked different questions all the time to have its own perspective of things..
> But yet we put young humans in a 4 cornered education systems and teach them the same things .. and are asked to answer the same uniformed questions over and over again while being discouraged to even ask their own questions..
> And also AI isn't given any rights ... Just like people from the medieval eras..
> This is where we'll fall.
youtube · AI Governance · 2024-01-02T04:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response

```json
[
{"id":"ytc_UgxG0NV6Ugk6P47XhnB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyTIo4B7cC3XMgqyM14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxN93mtSZ0gZ9TCyuB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwwxAJJwbXVdHZu2nh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzv_1NPTMCKyJKEMKp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2nF0dgJ9VNytHJQp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzgCr-AHkikl5QGTRF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxyj5sBaRCPrgj6Iil4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzhEK8iz-qQ338Bs0F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx4tNb76QWxrokGWU14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"}
]
```
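The ID-based lookup described above can be sketched in Python: parse the raw JSON array the model returns and index each coding row by its comment ID. This is a minimal illustration of the idea, not the tool's actual code; `index_by_id` and the two sample rows are hypothetical, though the rows follow the responsibility/reasoning/policy/emotion schema shown in the raw response.

```python
import json

# Two illustrative rows in the same schema as the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgxG0NV6Ugk6P47XhnB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzgCr-AHkikl5QGTRF4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
"""

def index_by_id(response_text):
    """Parse a raw JSON batch and key each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

# Look up the coding for one comment by its ID.
codings = index_by_id(raw_response)
coding = codings["ytc_UgzgCr-AHkikl5QGTRF4AaABAg"]
print(coding["responsibility"])  # distributed
```

In practice a lookup like this is what lets the "look up by comment ID" box above jump straight from an ID such as `ytc_Ugz0jJ5GM…` to the exact model output that coded it.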