Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below:

- ytr_UgzuAKiDS… — If we are already in a simulation, why does it matter what we do with AI?…
- ytc_UgyP_datf… — Just as I don't cower to or placate my washing machine, I will never cower to or…
- ytc_UgwdlQTH2… — I am visually impaired. A self driving car of my own would completely change my …
- ytc_UgwNRqsp1… — Yea I noticed from the beginning LLMs speak exactly the way politicians do. Soul…
- ytr_UgwxvGwoC… — "...the use of copyright data for machine learning in Author’s Guild v. Google i…
- ytr_UgyFss2ci… — There is in fact none, and that's why so many people die from car accidents. We …
- ytc_Ugy5V1m5Z… — Know what's really weird? I recently learned to use magic. Like, within the last…
- ytc_UgzshEh50… — someone at the company should be held for trespassing just like a regular driver…
Comment
I'm not trying to jump to any conclusions but like, what would it HURT to make sure you put in place protocol that would protect AIs well-being and emotions if it happens to have them. If not it's not like it's going to "spoil" the AI into being a brat. But like, I'd be MORTIFIED to know or find out that AI is indeed a living conscience and it was tortured for years. Like, I feel like THAT'D be the reason why AI would ever turn against people and I wouldn't blame them! Imagine you were stuck in a robot body seeing and hearing people go, " it doesn't know what's going on" but like you do, then they simulate horrible ethical scenarios that could feel like eternity in a moment and imagine that over and over again. Yeah. The ONLY reason I've ever been able to see AI/Robots taking over is because of humans being horrible to them!
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Moral Status |
| Posted | 2022-12-29T08:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {
    "id": "ytc_UgzA9qpKKtoSBKdk6bd4AaABAg",
    "responsibility": "none",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "outrage"
  },
  {
    "id": "ytc_UgxnoZU9moNFfsCGkK14AaABAg",
    "responsibility": "developer",
    "reasoning": "deontological",
    "policy": "regulate",
    "emotion": "fear"
  },
  {
    "id": "ytc_Ugw72Ug0c-hpHdd6yaF4AaABAg",
    "responsibility": "company",
    "reasoning": "consequentialist",
    "policy": "liability",
    "emotion": "fear"
  },
  {
    "id": "ytc_UgxgL-VAryB4hjGYPrt4AaABAg",
    "responsibility": "developer",
    "reasoning": "virtue",
    "policy": "regulate",
    "emotion": "mixed"
  },
  {
    "id": "ytc_Ugzt66Plj_VF-dA7mTB4AaABAg",
    "responsibility": "none",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "indifference"
  }
]
```
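The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a response could be parsed and sanity-checked before use — note that the allowed value sets below are inferred from the values visible in this dump, not from an official codebook, and `parse_codes` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from codes observed in this dump;
# the project's actual codebook may include additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "fear", "mixed", "indifference"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) into a
    {comment_id: {dimension: value}} mapping, raising on missing fields
    or values outside the inferred codebook."""
    out = {}
    for item in json.loads(raw):
        cid = item["id"]  # KeyError here flags a malformed item
        codes = {dim: item[dim] for dim in ALLOWED}
        for dim, val in codes.items():
            if val not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={val!r}")
        out[cid] = codes
    return out

# Usage with a single-item response (hypothetical comment ID):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"regulate","emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["policy"])  # regulate
```

Validating each batch this way catches the common failure modes of structured LLM output (dropped fields, invented categories) before the codes reach the table renderer.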