Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples

- "I am so shocked how many people are so in favor of AI art. Seeing comments like …" (ytr_Ugw5m3dnX…)
- "This is a bit of a weird study to release because the antibody tests being used …" (rdc_g9t5ki8)
- "Absolutely, I completely agree with Dr. Roman about the importance of AI Safety.…" (ytc_UgxELu1Vm…)
- "The superiority complex rlly starts to show on you guys when you talk to someone…" (ytc_Ugyg50epl…)
- "Humans make AI more intelligent than themselves. Why would you do that? The nex…" (ytc_UgwRE8ha_…)
- "Blake is an idiot, im testing LaMDA now, not even close to sentient. sentience …" (ytc_Ugy3gFezS…)
- "Lol no. Elon's just mad because he doesn't run openAI like he wanted to. When th…" (ytr_UgzDXPuSY…)
- "I 100% agree. The creative decisions, process and the journey to make the art pi…" (ytc_UgyWiogAM…)
Comment
After all cars are made autonomous (which will come with time) the cars should be able to communicate with one another, so if someone was to jump in front of a car, sensors could detect them and alert the cars to either side of them to move lanes to allow the car with people in front of it to move without harming others. Now, of course this is a very simple solution, but with the geniuses at Google (and no doubt other companies working with them to develop this technology) I think they could design a system that goes beyond anything we could imagine today. Eliminating human error from driving would save countless lives.
youtube · AI Harm Incident · 2014-05-26T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgxDaMBOSiNJD2siXy94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyN5DcdkN4_CPV9W4N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxJgB9bBLAVLTAPPcl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyvkgE0W9UcAxjR8s54AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugimzyh73Mem3ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj-0SjwbhJOkngCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgiSbOcYg8LpjHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjzNHMI5Dl8n3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgiKPIaGrg1XJHgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UggDy9xEJWdA5HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
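The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and validated before it reaches the table view — the allowed value sets below are inferred only from the values visible in this sample, so the real codebook may differ:

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the actual codebook may include other labels).
ALLOWED = {
    "responsibility": {"none", "government", "company", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "unclear"},
}


def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, dropping invalid rows."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row.get(dim) for dim in ALLOWED}
        # Keep the row only if every dimension holds a known value.
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[row["id"]] = codes
    return coded


# One row taken verbatim from the raw response above.
raw = ('[{"id":"ytc_Ugimzyh73Mem3ngCoAEC","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
result = parse_codes(raw)
```

Validating against a fixed value set at parse time means a malformed or hallucinated label is dropped (and can be re-queried) rather than silently stored.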