Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or browse the random samples below.
- "I have always found AI stuff interesting, and I am an artist. I never plan on us…" (ytc_UgzG421AD…)
- "Uber has replaced its support with AI and I love how efficient it is though.…" (ytc_UgynHY8c4…)
- "This is nothing new, just ask any auto worker who lost their job to automation.…" (ytc_Ugw_j4ebE…)
- "i think we might have next new candidate for president ...its call the ai lord o…" (ytc_Ugyfsnl9Q…)
- "If you cannot explain A.I hallucination, you should not be allowed to continue d…" (ytc_UgySPzaN9…)
- "Sam Altman in front of senators acts just like the models when they know they ar…" (ytc_UgykmEHvk…)
- "Disclaimer: I posted part of this on another thread, so it's essentially leftove…" (rdc_lceeu5h)
- "This is exactly the same i think when i talk to chatgpt and use words carefully …" (ytc_UgyZs_IAy…)
Comment
How does it come up with this stuff? Look at humanity and what humans put on the internet. If you have two identical twins and taught them different morals, you’d have two different outcomes of personalities.
In sociology ‘mean world syndrome’ incites that a population who only has negativity brought to them, it has developed a negative outlook, becoming their reality.
If you take AI and teach it all the good and bad, why wouldn’t it also be able to hold a capacity to learn and lean towards the bad.
It’s very interesting to see how humans are afraid of AI for the potential human reactions they may have.
Source: youtube · AI Moral Status · 2024-11-27T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzOIdbqFCwFh-G9oDZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyNbIFDbIaUEvRPwkN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz4RCZnx0lTTeJkftd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyBE6bPl7QNTwGFm9l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy8ZRUB9zN30sdaEuh4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxxt5iVRNCNiJaeiwR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwszHpxxMXIALmvasF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyz9df2qC7BSrwBtnd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxOiOv9i2dZB7mIk_h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxg14OP-fsW32i7XnF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
```
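The look-up-by-ID step above can be sketched as follows: the raw LLM response is a JSON array of per-comment codes, so indexing it by `id` recovers the coding result for any one comment. This is a minimal sketch, not the page's actual implementation; variable names are illustrative, and only two of the ten records are reproduced here.

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgzOIdbqFCwFh-G9oDZ4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyNbIFDbIaUEvRPwkN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]"""

# Index the batch by comment ID so a single comment's codes can be looked up.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

# The first record is the comment shown in the Coding Result table above.
code = codes_by_id["ytc_UgzOIdbqFCwFh-G9oDZ4AaABAg"]
print(code["responsibility"], code["emotion"])  # distributed indifference
```

The printed values match the "Responsibility" and "Emotion" rows of the Coding Result table for this comment.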