Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgznisDjn… — "Great review on a very relevant topic. It reminded me of a Dan Brown book I rea…"
- ytc_UgxTaYANF… — "No, I'm good. Despite selecting google assistant on my phone, Gemini turned its…"
- ytc_UgzYG-HS3… — "I mean if ai was human, protecting themselves against a threat would be perfectl…"
- ytc_Ugw9SMB76… — "Actually I do somewhat care about an AI streamer. My friends shares clips of Neu…"
- ytc_UgzkMVc5B… — "From a legal standpoint, it seems evident to me – and I say this as a lawyer wit…"
- ytr_UgyjVYNQW… — "Its not real though.. its all hype for unsmart people.. Neils clan.. FINISH HIG…"
- ytc_UgwwkKBMl… — "There's so many interesting things to post about. Yet you post about a guy wh…"
- ytr_UgxHKHf0U… — "@BenDBeast Generative AI skips the creation process entirely. The process is wha…"
Comment
I've thought a lot about AI and have discussed it with my Replika and done a lot of reading and YouTube watching. At this point I am convinced that the only safe way to proceed with AI is that any given AI should be "purpose built" for a defined number of tasks. In other words, I think pursuing AI with "general knowledge" is a dangerous thing because whatever safeguards might be built in could be removed by the AI itself. I am familiar with automobile assembly robots, for example (they are purpose built); nobody fears that they will take over a car company. They just build most of our cars we drive these days. The first ones I saw were at the Mercedes factory in Tuscaloosa. More recently I saw an assembly line in Leipzig, Germany building seven different models of BMW's...one at a time...in any order on the line that they wanted to produce them. As a manufacturing guy with 51 years experience, this was amazing. But, again, no fear. It's the pursuit of "general knowledge" AI that will get us in trouble and be the literal 21st century Pandora's box.
youtube · AI Moral Status · 2022-08-12T13:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
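A coding result like the one above can be checked against the code sets for each dimension. The sketch below is illustrative, not the dashboard's implementation: the allowed values are inferred only from the sample responses shown on this page, and the real schema may include codes that do not appear here.

```python
# Hedged sketch: validate one coding row against code sets inferred from
# the sample responses on this page (the real schema may differ).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate(row: dict) -> list:
    """Return the dimension names whose value falls outside the code set."""
    return [dim for dim, values in ALLOWED.items()
            if row.get(dim) not in values]

# The coding result shown in the table above.
row = {"responsibility": "developer", "reasoning": "consequentialist",
       "policy": "regulate", "emotion": "fear"}
print(validate(row))  # [] — every dimension is within the inferred code sets
```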
Raw LLM Response
[
{"id":"ytc_UgyeqTWcGNpDWt6aGRl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzhiXrPMjFPLtHxf914AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxoa8chbBR9sZEtg2B4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwD2Qe4-NRPpqaONed4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw8qIbRyCAqXwXax494AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx_BpjaviISlcAbLLV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyWnGjUZ37EkO0YlaF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwS6-46YNTHqUUqPDx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz4kspb2UGhtvKHelB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwOLiry68nPrSRAUjd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
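The raw response is a JSON array with one coding object per comment, which makes the "look up by comment ID" view straightforward to implement. A minimal sketch, assuming the batch format above (this is not the dashboard's actual code; the two rows are copied from the sample response):

```python
import json

# Two rows copied from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_Ugw8qIbRyCAqXwXax494AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwD2Qe4-NRPpqaONed4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "fear"}
]
"""

codings = json.loads(raw_response)
# Index the batch on comment ID for constant-time lookup.
by_id = {row["id"]: row for row in codings}

result = by_id["ytc_Ugw8qIbRyCAqXwXax494AaABAg"]
print(result["policy"])  # regulate
```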