Raw LLM Responses
Inspect the exact model output for any coded comment, or look it up by comment ID.
Random samples — click to inspect
@Chuchel-hh6hqas someone who works in the civil engineering field, no , because …
ytr_UgwQrm9UV…
The problems with AI will arise when someone develops and integrates a simulated…
ytc_UgxJveLO3…
@serazvi5387youre not quite understanding how no human therapist can compete wi…
ytr_UgyqGIhOB…
Zach these MFS wanna go to bed, push out this short now to their algorithm…
ytc_UgwMwabib…
@adamjutras7024 never said an artist can own a style. My point was that these ai…
ytr_UgyKTYRkn…
There is such a thing as ancient AI. What ever exists now has already existed be…
ytc_Ugyoenn_8…
I just keep feeding many specific questions about my simple research....to A.I. …
ytc_Ugy8KFd2q…
Why does she go to Dune as a way to understand the "AI" cult when Dune is alread…
ytc_UgxAPUL6V…
Comment
Only half joking here, but one important step in AI seems to have repeatedly been skipped in it's creation: The Robot Laws. Now, I am not just talking about the actual fictional robot laws, but parameters that can avoid, prevent, or solve harmful unintended consequences stemming from creating AI in the first place. Put them in place from the start of the creation, and there would be at least some bias and danger avoided. If someone's pet project seems to be not functioning with those parameters in place, it's time to start over until a path is found where the project both works and runs the parameters of protection. We seem to always do that in reverse, trying to plug leaking holes and dangerous systems after they are already causing chaos. If we introduced stop-gap measures firstly, the AI itself could be trained to sniff out those possible bad results, route itself around them, and function efficiently. It's what they appear to want to do, but they are only as good at this as a human would be. First, we humans have to want to stop all the harming we are doing ourselves. JMO.
youtube
AI Responsibility
2023-11-06T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyF6IfSN3VbGBs3U3Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwHqeUcFeWwY_BeopZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwMvzdj84njrG-ZJWp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw6j839QqQYsUOGkkV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxmiDdYRLsKvrPR9nd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugyg82mTX_D-Hay4UlV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwkRT1zJf9ESSvQWqZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyvAuyoxsti5Znfu994AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxrFUMp8QWZZT2cPqd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzUCuVRmsEFXUkvsUx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
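The coding result shown above is one record from the raw batch response below. A minimal sketch of how such a response might be parsed and validated before use (the allowed code sets are inferred only from the values that appear on this page and may not cover the full codebook; `parse_response` is an illustrative helper, not part of the tool):

```python
import json

# Allowed codes per dimension, as observed in the responses on this page.
# Assumption: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def parse_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and validate every coded record."""
    records = json.loads(raw)
    for rec in records:
        # IDs on this page start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim} code {rec.get(dim)!r}")
    return records

# Example with one record from the batch above.
raw = ('[{"id":"ytc_UgxmiDdYRLsKvrPR9nd4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
records = parse_response(raw)
print(records[0]["policy"])  # regulate
```

Validating up front means a malformed or off-codebook model answer fails loudly at ingestion instead of silently skewing the downstream counts.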