Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
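If you prefer to do this lookup offline, the snippet below is a minimal sketch that assumes the codings have been exported to a JSON file shaped like the raw LLM response arrays shown further down (the filename coded_comments.json and that structure are assumptions, not part of this page). It builds an index by comment ID and prints the four coded dimensions.

```python
import json


def load_codings(path="coded_comments.json"):
    # Assumed export format: a JSON array of objects, each with an "id"
    # plus the four coding dimensions (responsibility, reasoning, policy,
    # emotion), matching the raw LLM response shown below.
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}


if __name__ == "__main__":
    codings = load_codings()
    # Example ID taken from the raw response on this page.
    comment_id = "ytc_UghImfg7p-LkC3gCoAEC"
    coding = codings.get(comment_id)
    if coding is None:
        print(f"No coding found for {comment_id}")
    else:
        for dim in ("responsibility", "reasoning", "policy", "emotion"):
            print(f"{dim}: {coding[dim]}")
```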
Random samples
| Comment preview | ID |
|---|---|
| Why are they blocking AI tools? GPT4 can't solve anything super complex, but it … | rdc_l57hzb9 |
| This robot already is better than most western women. You looked at it for more … | ytc_Ugwj-PA3e… |
| Race wasn't included in the prompt so the results didn't give a care for it. I d… | ytc_UgzoRlU5w… |
| I know how AI would be most dangerous...but if i say it, someone will use my ide… | ytc_Ugyl7Xs8N… |
| Anybody that writes a story is an author, a good author using an AI to help writ… | ytr_UgwXitAvp… |
| Oh boy, oh girl, another Y2K, mayan calendar event, or 2020 agenda? I love thes… | ytc_UgyZhBY3e… |
| Ever since the word 'ai' was mentioned, and info came out, I've always thought p… | ytc_UgwN1V9_M… |
| not trying to sound insensitive here but the "ai" part of this story is likely f… | ytr_UgxjEA9rx… |
Comment
As with any ethical question, it can go both ways. But I want to say that if developing artificial intelligence leads us to the point in human history when we are so unchallenged and comfortable that we lose our existential authenticity/humility as a species, then something is wrong. I mean that if the day comes when all the work is done for us by a.i., the character of the human outlook on the whole will change. That change might be positive to a degree, but as with all instances of progress, something will inevitably get lost in the scuffle. Since the Industrial Revolution, human outlook and values have changed dramatically, and not always for the better.
I'm surprised you didn't mention the V.I. and A.I. distinction in Mass Effect. That's pretty interesting stuff, a way to avoid any ethical misgivings some people may have or developing a.i.
Source: youtube
Posted: 2013-11-09T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
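For working with these results programmatically, here is a minimal sketch of the record shape implied by the table above and the raw response below. The allowed value sets are only those observed in this sample, not necessarily the complete codebook, so treat them as assumptions.

```python
from dataclasses import dataclass

# Dimension values observed in the sample on this page; the full codebook
# may contain additional categories (these sets are an assumption).
RESPONSIBILITY = {"none", "company"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"fear", "outrage", "indifference", "resignation", "approval", "mixed"}


@dataclass
class Coding:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        # True if every dimension takes a value seen in this sample.
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```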
Raw LLM Response
[
{"id":"ytc_UggQA9piQKPPHHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugi7lWCqY9ksDngCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiOrCe094MKjHgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgibYzQAmZAn1ngCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgiWTc5cGlkjXngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UghImfg7p-LkC3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjrmcSECEPYmngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugid59dtjYGUrXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UggZCNMF-BBN4HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgjGEKM8R0Z5f3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
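Because the raw responses are stored verbatim, a small parsing step is needed before they can be joined back to comments. The sketch below assumes a batch response like the one above (a JSON array with one object per comment) and tolerates an optional Markdown code fence around it; the fence handling and the required-field check are assumptions about how one might consume this output, not a description of the coding pipeline itself.

```python
import json
import re


def parse_raw_response(text: str) -> list[dict]:
    # The response shown above is already plain JSON, but model output is
    # sometimes wrapped in a Markdown code fence, so strip one if present.
    text = text.strip()
    fenced = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fenced:
        text = fenced.group(1)

    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("Expected a JSON array of coding objects")

    # Every record should carry the comment ID and the four dimensions.
    required = {"id", "responsibility", "reasoning", "policy", "emotion"}
    for rec in records:
        missing = required - rec.keys()
        if missing:
            raise ValueError(f"Record {rec.get('id', '?')} is missing {sorted(missing)}")
    return records
```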