Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Human language", as we know the English tongue is horrible. So there will be ei… (ytc_Ugzrb2GG7…)
- Someday parent's will have AI bot baby kids, no more real kids, and best part th… (ytc_UgzfUampe…)
- One thing people don’t talk about is how Ai can help optimize and implement a ro… (ytc_UgxdvXNSY…)
- Love this video. It really makes me want to have more active conversations about… (ytr_UgwnvsuC2…)
- I was about to do my Masters in TEFL but I feel like AI will definitely disrupt … (ytc_Ugy81StNL…)
- the ai would just referr the person to the suicidide hotline it has promps for t… (ytc_UgyqOSr2-…)
- This will eventually bite everyone in the ass. The problem is, you need humans t… (rdc_m273rdi)
- Honestly it depends, but in my opinion, Gemini is the number one pick as of now … (ytr_Ugw6LyJ3v…)
Comment
Anyone thinking "doomsday cult... Programmers, scientists, and wealthy people... Then thinking wait Covid-19 was made in a lab"? 🤔
And now there's AI granny that can help them improve the recipe.
And we're still moving forward with AI.
If it can lie and be tricked at this level, then the complexity it will be able to game theory lies in 10 years time will be that complex that we won't be able to create fail safes intricate enough to put in the safe measures.
It's a wrong road, and we should be walking it back not running forwards.
Source: youtube · Posted: 2024-01-28T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy_XBQK4Ol5jQbyFCd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugztr6lW58eRW3Kym014AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw5Tzyu4G4O6EDB-h54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzan-fXa4BMWFeZCW54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgysyKsYqDCPoxZck094AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxMBeUplaDGyFAp9LV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw9ePl7bBcbwXeYl7V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyAPdfWTwHiVItQsit4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzdAIdsXaeXmuyOVH14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzSxszhbH4oWDypNIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
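The raw response above is a JSON array with one record per coded comment, carrying an `id` plus the four dimensions from the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of parsing and shape-checking such a response might look like this; the comment id used in the example is hypothetical, and any allowed values beyond those visible on this page are assumptions:

```python
import json

# The field names are taken from the coding table above; the full value
# vocabularies for each dimension are not shown here and are assumptions.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and verify each record's shape."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing {missing}")
    return records

# Hypothetical single-record response, mirroring the format above.
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
records = parse_llm_response(raw)
```

Validating the shape before ingesting codes catches truncated or malformed model output early, rather than letting a partial batch silently corrupt the coded dataset.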