Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "this unfortunate accident could have been to somebody/something close to you, w…" (ytc_Ugw92ZW-Q…)
- "We trying to make slaves yet we will become the slaves lol typical (our ways)…" (ytc_Ugxu_IfHK…)
- "The tech titans/ Oligarchs do not care about humanity. They want to destroy the…" (ytc_UgyumiiOQ…)
- "It’s kinda scary being a girl or a women now a days. So many sickos out there an…" (ytc_Ugz2Bt3G2…)
- "his prediction won’t age well. LLM cannot learn by definition, just big pattern…" (ytc_Ugyi2OuVJ…)
- "@Paulitopt nah, blame the parents for letting their kid run across the street in…" (ytr_UgwS-6CAh…)
- "Is it just me or did ChatGPT misundertsant the question about the five people wh…" (ytc_Ugz7t2JIA…)
- "I'm pretty sure Japan is working on care giver robots that will be easier for ol…" (rdc_j43ju7d)
Comment
You cannot teach morality. You can train a computer to do--or not do--things (like teaching a child to not walk into a street without looking) and that can be done well. You cannot teach that computer why it is acting in a trained way. For example, the AI won't think that it could hurt someone and that hurting someone is a bad thing. It can be taught that running into a street and being hit by a car is not a good decision. It can only know that--if a car is moving at a certain distance--do not walk in its path.
Source: youtube · Viral AI Reaction · 2023-05-25T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxaS56WNEpqxH_Dw5Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzZaJDVcoSfDkoKtqZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwwXxqANGCiGxn50_J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxRWwSrn1NrQ9hLq0x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxTXWBJggVI-rrv6Td4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzznroFpKsyHTPVQXJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw_526iM0r3XZUq3yJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyPAxw-IuEgMMAeWDN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw4ljG3uIYU3CdZA_F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyPLzsTdyk6tVSC2fd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"unclear"}
]
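The raw response above is a JSON array with one object per comment `id`, so the "Look up by comment ID" feature amounts to parsing the array and indexing it by that field. A minimal sketch (the `index_by_id` helper is hypothetical, and `raw` here holds an abbreviated two-row copy of the response shown above):

```python
import json

# Abbreviated raw LLM response: a JSON array of coded comments, one object per ID.
raw = """
[
 {"id":"ytc_UgxaS56WNEpqxH_Dw5Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgyPLzsTdyk6tVSC2fd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"unclear"}
]
"""

def index_by_id(raw_json: str) -> dict:
    """Parse a raw LLM response and index the coded rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw_json)}

codes = index_by_id(raw)
# Look up one coded comment by its ID, as the page's search box does.
print(codes["ytc_UgyPLzsTdyk6tVSC2fd4AaABAg"]["emotion"])  # unclear
```

In practice the parse should be wrapped in a `try`/`except json.JSONDecodeError`, since a model can return malformed JSON; storing the raw string (as this page does) preserves exactly what came back either way.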