Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples below (a lookup sketch follows).
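Under the hood, a lookup like this is just keyed retrieval over the coded records. A minimal sketch, assuming the results are exported as JSON Lines with one record per comment; the file name `coded_comments.jsonl` and the loader are hypothetical, not part of the tool:

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Index coded comment records by ID for direct lookup."""
    index: dict[str, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            index[record["id"]] = record
    return index

# Hypothetical export path; substitute the tool's actual coded-results file.
coded = load_coded_comments("coded_comments.jsonl")
print(coded.get("ytc_UgjOOT8Vua498ngCoAEC"))  # the comment inspected below
```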
Random samples:
- `ytc_UgwEsNFa3…`: "Ai will not destroy humanity. Greedy humans will have long destroyed the rest of…"
- `ytc_UgyNV_GEo…`: "The parents are the problem.. if you even think of telling chatgpt anything remo…"
- `ytc_UgxZmrcuQ…`: "This is what happens when people with Arts degrees and diplomas in management us…"
- `ytr_Ugwv5BHid…`: "That was literally the foundational problem in the matrix you know… a robot one …"
- `ytc_Ugx0D8BYB…`: "The internet amplifies the best and worst of humanity (more often the worst sinc…"
- `ytr_UgzT8ExKU…`: "Exactly it will learn..you just need the way to describe the need and have the A…"
- `rdc_oi0ni22`: "Sounds like they did something along those lines, given they said they were usin…"
- `ytc_UgyOVhUb0…`: "Crazy how there's backlash but A.I still gets a shit ton of engagement and likes…"
Comment

> I don't have answers to all ethical dilemmas about self driving cars. But according to my opinion, it is strictly morally forbidden to actively kill someone (and I'd say even just harm him) in order to save yourself, even if in your car there are more passengers, which means there will be a net gain for the human species. But when it comes to passive killing, i. E. Not doing anything to save someone, its the opposite; you shouldn't save someone, even if they are many people, if it would put you at possible bodily risk (unless of course if that's your job, to rescue people in danger).

youtube · AI Harm Incident · 2017-07-02T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
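The table above maps four coded dimensions to one categorical label each. A minimal schema sketch for such records; the label sets are only those observed in the raw response below, and the class and its names are illustrative rather than taken from the project's code:

```python
from dataclasses import dataclass

# Label sets observed in the raw response below; the actual codebook
# may define additional values for each dimension.
RESPONSIBILITY = {"none", "developer", "user", "company", "ai_itself"}
REASONING = {"unclear", "consequentialist", "deontological", "contractualist"}
POLICY = {"none", "liability", "regulate"}
EMOTION = {"indifference", "resignation", "fear", "approval", "mixed", "outrage"}

@dataclass
class CodedComment:
    """One coding result: four categorical dimensions attached to a comment ID."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Reject any record whose labels fall outside the observed sets."""
        for name, allowed in [
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ]:
            value = getattr(self, name)
            if value not in allowed:
                raise ValueError(f"{self.id}: unknown {name} label {value!r}")
```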
Raw LLM Response
[
{"id":"ytc_Ugjnw_pI28jYpXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghwOGDepVXCWngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj3YY9osWlB4HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UghifWP6y7_ogXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg4ldklSPeo8XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjywnlXpJLqFHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgjokRxbpwiSqHgCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghBFWCU-Fp7bngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjOOT8Vua498ngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgirnfINQGNpP3gCoAEC","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
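Before a batch like this is written back to the store, it can be parsed and checked. A minimal sketch reusing the hypothetical `CodedComment` class above; it assumes the model returned a bare JSON array, which holds for this response but may require fence-stripping for others:

```python
import json

def parse_llm_batch(raw: str) -> dict[str, "CodedComment"]:
    """Parse one batched LLM response into validated records keyed by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        comment = CodedComment(**rec)  # raises TypeError on missing or extra keys
        comment.validate()             # raises ValueError on unknown labels
        coded[comment.id] = comment
    return coded
```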