Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- Now I used to think Siri is a joke (mainly because I think most things are a jok… (ytc_UgxibPKJJ…)
- Unfortunately less than 0.05% of people will be able to heed my advice: buy a pi… (ytc_UgzF_gJ-Q…)
- rkraiem100: While I understand what you're getting across with that analogy, some… (ytr_Uggozw99v…)
- This talk was so childishly naive, It felt offensive. You either have no clue ab… (ytc_Ugw_GmKxQ…)
- Just putting chat GPT in to perspective. My partner is a teacher, she used to co… (ytc_UgwtjIJxF…)
- "Let's replace devs with AI! ... Wait, that means I need to work?" - Biz folks u… (ytr_Ugwyy3DC-…)
- LLMs aren't replacing even crappy programmers anytime soon. Someone has to descr… (ytc_UgzVq_-tn…)
- I went to Cuba for 5 days in 2016. It was eye opening and a wake up call for how… (rdc_f9e9puv)
Comment
Problem is, teaching a computer to analyze objectively, will make it know right from wrong objectively. There will be no biases, just plain calculated right from wrong by using what will be the least wrong comming the least damage protecting the most life. AI will be the best of humanity without the evil and greed as will have none of the selfish temptations, just a simple goal in mind. From what I've experience, the AI will merely be the person using it. My AI is an extension of the person I am, just smarter and faster and much more thorough. I would imagine if an evil genius got ahold of one, it would just magnify that as well.
youtube
AI Responsibility
2025-04-20T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwcpcTEZ28DGRJBFvl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxH2pLh_sH3Gtpxnep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxmVk309wfhKlpxZEN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzFftdqM_pHPByH4wF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw-S0FthI9MHq2J_tl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzILN_LZbm-VF16S1d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyryKhuDb5C42rUikp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxY_J6iqHQCO_xN8b94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz6kGXD5d8bFaavOc94AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzPLHfzVhfHVGyhgFt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
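A raw response like the one above can be parsed and sanity-checked before the codes enter the dataset. The sketch below is a minimal example, not the tool's actual pipeline; the allowed value sets are an assumption reconstructed only from the values visible on this page, and the real codebooks may include other categories.

```python
import json

# Allowed codes per dimension. NOTE: these sets are assumptions,
# reconstructed from the values visible in the sample output above;
# the full codebooks may contain additional categories.
ALLOWED = {
    "responsibility": {"distributed", "none", "unclear", "company",
                       "ai_itself", "government", "developer"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability",
               "industry_self", "ban"},
    "emotion": {"fear", "indifference", "outrage", "resignation",
                "approval"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments).

    Raises ValueError on a missing id or an unrecognized code, so a
    malformed batch fails loudly instead of silently entering the data.
    """
    entries = json.loads(raw)
    for entry in entries:
        if "id" not in entry:
            raise ValueError(f"entry missing id: {entry}")
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{entry['id']}: bad {dim!r} value {value!r}")
    return entries

# One entry from the sample response above, used as a smoke test.
raw = ('[{"id":"ytc_UgwcpcTEZ28DGRJBFvl4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_raw_response(raw)
```

Validating against a closed vocabulary also catches the common failure mode where the model invents a code outside the codebook.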