Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "As an Art Historian I have to say that the so called "Modern Art" (bad term by t…" (ytc_UgxsPr7nS…)
- "So wait… the leader of AI safety is just a whistleblower saying we can't make it…" (ytc_Ugy2VaRrr…)
- "I think an AI led society only works without capitalism. If the routine work of …" (rdc_n7t0hog)
- "More recess and less tech. Do we want our great grandchildren to be more & more …" (ytc_UgzuwNtiP…)
- "That's great. But there should still be regulations on AI so this doesn't happen…" (ytr_UgzZIXXj2…)
- "😂😂 Stop everything, guys. Artificial intelligence or not, the robots will…" (translated from French) (ytc_Ugzpzc09t…)
- "What is its accuracy and capability to detect early or asymptomatic cases? Is it…" (rdc_fjz7z7n)
- "I'm not hearing anything new here. Might as well have interviewed a Sci-Fi write…" (ytc_Ugw-Bi7yl…)
Comment
isn’t that how AI was originated and worked on that everything had to be verified Able before it was put into the intelligence system… Is that not what happens now I guess with OpenAI it may not but I do think that the original AI that Elon was working on and was helping to produce was very supervised and very accurate… Could not this system be put into effect now with the AI and the open Ais that are around and doing what they’re supposed to be doing and then they can be checked about if it’s good for humanity or it’s bad for humanity or it has no effect on humanity… It should be possible to check just like it was possible to put accurate honest information into artificial intelligence originally… I had visions of many people over seeing everything that went into these brains that were to be used to improve sources of information not to exclude a person doing their own research first… I don’t know what I’m talking about much but I know that things are getting a little bit dicey and I wish I knew what to do😢
Platform: youtube
Posted: 2026-04-03T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyM4FigipMimg5ifFZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxiPx_xxzkflOZDUUJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyLlyHP9cRVmBRL8ql4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw8L8M988pbI3LhhBB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx235M1a87sTzqDfRl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyk6TlI61fjrL8HBZN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzqr8J3iy5XScz5aTJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgynNzGh1nVZKwzG_K94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzlQW2VAnB1jqPePTd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw3vnqTmVNaaxsR5Dx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
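The raw response above is a JSON array with one record per coded comment. A minimal sketch of how a downstream script might parse and sanity-check such a batch is shown below; the allowed-value sets are only those *observed* in this sample (the full codebook may define more), and the validator itself is hypothetical, not part of the coding tool:

```python
import json
from collections import Counter

# Dimension values observed in this batch only; the real codebook
# may allow additional values (assumption, not confirmed by the tool).
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "distributed", "government", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "approval", "mixed", "outrage"},
}

# Two records copied from the raw response above, for illustration.
raw = '''[
{"id":"ytc_UgyM4FigipMimg5ifFZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw3vnqTmVNaaxsR5Dx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]'''

def validate(batch):
    """Count values per dimension, raising on missing keys or unknown values."""
    counts = {dim: Counter() for dim in OBSERVED_VALUES}
    for record in batch:
        for dim, allowed in OBSERVED_VALUES.items():
            value = record[dim]  # KeyError here means a malformed record
            if value not in allowed:
                raise ValueError(f"{record['id']}: unexpected {dim}={value!r}")
            counts[dim][value] += 1
    return counts

counts = validate(json.loads(raw))
print(counts["emotion"])
```

Because the model returns free-form text, a strict parse-and-validate step like this catches truncated JSON or invented category values before they silently enter the coded dataset.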