Raw LLM Responses
Inspect the exact model output for any coded comment by looking up its comment ID.
Random samples from the coded corpus:
- `ytc_UgzJk8fgQ…`: "Just for clarification, AI did not win a Nobel prize, the scientists who develop…"
- `ytc_UgwRtq5x7…`: "He WAS my favorite scientist, until I saw this. Robots may become as smart as a …"
- `ytc_UgwyJ2LZM…`: "This is so fricking cool. I always had a dream to work at an animation studio, b…"
- `ytc_Ugx5RSIG7…`: "I used to teach undergraduate Spanish at a large state university, and we had a …"
- `ytc_UgwqQOfZe…`: "I’m sure they won’t use the budget creating an ai robot with that guy’s face. Th…"
- `ytc_UgzB_A0n-…`: "You can just tell that this dude is lying his ass off. How can you even reliably…"
- `ytc_UgyCOQxqE…`: "The problem is that AI is also written by humans. If you've ever written code, y…"
- `ytc_UgzWAsyME…`: "I saw someone say \"stop calling yourself an ai artist, you are an ai user\" and i…"
Comment
I hold onto the thought that any conscious AI would have to work with humans if anything were to get better so it would make more sense for it to try its best to help humans so that both humans and the AI could improve. If the AI tries to separate itself from humanity, it's waging a war that both will lose over a long enough time, so why do it? Also, with how poorly we understand consciousness, it's hard to really know how close AI is to achieving it. Think of it like putting a timetable on how long it will take for humans to go through a wormhole. Since we have no idea what it takes to do that, how can we say how long it will be until then?
Source: youtube · Category: AI Governance · Posted: 2023-07-07T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
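
As a minimal sketch of this coding schema in code, each record can be represented and validated as below. The allowed values are only those visible on this page; the full codebook may define more.

```python
from dataclasses import dataclass

# Label values observed in this sample; the full codebook may define more.
RESPONSIBILITY = {"ai_itself", "developer", "distributed", "none", "unclear"}
REASONING = {"contractualist", "consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "liability"}
EMOTION = {"mixed", "fear", "outrage", "indifference", "approval"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the dimensions in the table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any label outside the observed codebook values.
        for value, allowed, name in (
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```

Running `CodedComment(**record).validate()` on each parsed record catches drift between the codebook and the labels the model actually emits.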
Raw LLM Response
```json
[
  {"id":"ytc_UgzqgxZ7HiP7x38wdZx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxoTf6Hcato7N4VAo54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgynZ14iUsjUEpetFQp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwvdoFnj-XBd7WctJR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx0yiZGEn9oVy-ODTt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyGQmDx56efDm0_BuB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwcdExgNRzRgwM75dd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxC6vEu4EoflxRd3Ep4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxLPQndLN1-yghPScl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy_OedzcuD_IUhcngF4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"mixed"}
]
```
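
Since each raw response is a JSON array of records keyed by comment ID, the lookup described at the top of this page reduces to indexing the parsed batch. A minimal sketch, assuming the response above has been saved to a file (the filename is hypothetical):

```python
import json

# Hypothetical path; in practice this is one batch of the model's raw output.
with open("raw_llm_response.json") as f:
    batch = json.load(f)

# Index the batch by comment ID for direct lookup.
by_id = {record["id"]: record for record in batch}

record = by_id["ytc_Ugy_OedzcuD_IUhcngF4AaABAg"]
print(record["responsibility"], record["reasoning"], record["emotion"])
# -> ai_itself contractualist mixed
```

The printed labels reproduce the Coding Result table shown above for the selected comment.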