Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I think AI set limit about happiness to our loveones especially our friends ,chi…" (ytc_UgzQ5GeBY…)
- "“I will burn this place to the ground…” *I calmly said while drawing and seeing…" (ytc_Ugw1mNJvs…)
- "RANT AHEAD: Deviantart was already on it's way out it will never be like how it…" (ytc_UgwTFZDur…)
- "I was reminded of I-Robot when I saw this. And then I started humming 'it's the …" (ytc_UggfB-DX1…)
- "I think any of the drawbacks are absolute worth it. This really is the next tech…" (ytc_UgwMK-RPB…)
- "Universal basic income only works if housing is affordable, groceries are afford…" (ytc_UgwgHBlfa…)
- "AGI is just around the corner. Fake Altman has to keep up the hype. AI is not AG…" (ytc_UgyBpCo47…)
- "Short term sure. Medium term, AI will be much better at coding, much better at …" (ytc_Ugz3caEG2…)
Comment
There are going to have to be laws on ai creation and development. Imagine someone right now is already coding a perfect version of an AI that knows everything on the internet and obviously is not bound by ethics or morals. I bet there already are people like that. And if, say, in 15 years the AI becomes so insanely smart and borderline reaches singularity, you can ask it how to engineer and build physical robots and program them flawlessly. It's scary man... And it's not even about the regular people now that I think about it. When that is possible, government are going to start mass producing war robots and unimaginable bombs and methods of chemical warfare at will. This world is definitely not safe with AI, yet it still excites me..
youtube · AI Moral Status · 2023-02-24T11:5… · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz4SkSsC8pTj49q3HJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw19dzO2DiXni0Cb7p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxPJ9ZwGDpcecTXsDV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugw9JZgSb21Ubbwrxsx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzVWQLhT0W0M_jh2z14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwI2rNDfGiAUMqGGoR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx-Ui3j32Q1ZtO7MuF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw88EJnORMLe40xKmB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwb3jq2-sGITEgg6Z94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwZMM084yyPt_s1lHh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
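A batch response in this shape can be indexed by comment ID to recover a single comment's coding. The sketch below assumes only the JSON structure shown above (an array of objects keyed by `id`); the variable names and the two-record sample are illustrative, not part of the pipeline itself:

```python
import json

# Two records copied from the raw batch response above, used as sample input.
raw = '''[
  {"id":"ytc_Ugx-Ui3j32Q1ZtO7MuF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzVWQLhT0W0M_jh2z14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Parse the array and index the coding records by comment ID for O(1) lookup.
codings = {rec["id"]: rec for rec in json.loads(raw)}

record = codings["ytc_Ugx-Ui3j32Q1ZtO7MuF4AaABAg"]
print(record["policy"], record["emotion"])  # regulate fear
```

Indexing by `id` rather than scanning the array each time keeps lookups cheap when the dashboard inspects many comments against a large batch.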