Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Let's see AI come fix my plumbing. You old geriatric socialist you know what you…" (ytc_UgznKHow9…)
- "Really well done but a few thoughts. There was no mention of the students majo…" (ytc_UgzbMjuI5…)
- "As a biologist, its honestly do fascinating to see people deny that LLMs are int…" (ytc_UgyjxE3ed…)
- "In the end men's hearts will fail them for fear will take them over and in the l…" (ytc_Ugy_fX2t9…)
- "But then they realize another human is looking at them watching an AI observing …" (ytc_UgyeAwgMn…)
- "AI isn't killing the degree, lack of learning anything is. Today's 4 year degre…" (ytc_Ugyf7PMJX…)
- "I’m so glad I’m going thousands of dollars into debt so I can go to university t…" (ytc_UgysJNHKI…)
- "Hey there! In the video, the focus was on the interaction between the presenter …" (ytr_UgzJNTwqS…)
Comment
So if I have the right mindset I can do it but my blackbox AI thingy on my desk how do I test that for ethics? I don't think I learned anything here about AI ethics n more about how to hope not to build anything into it myself directly. How does that help my future lawnmower not to skip time by destroying my lawn for good by unleashing some learned calamity?
Maybe it could use some binding to an LLM to have a reflective discussion about an action to be done first to see if it can work? Like "Should I mow there knowing a cat is in the way?"
A kind of pondering AI that philosophizes within its OODA-loop... utilizing the common sense encoded within a LLM maybe with a do-no-evil-prompt.
youtube · AI Responsibility · 2024-02-05T12:3… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy3qYN9KNBLo5WGIfp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzu8isOgQWq3gVdzPN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyu9KPReLR2Gj6ZcuB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyLgaXpY30WF_ocst14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx0iYEER2eaEQV1KwZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzRkhYLoHqydt-6l_p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwYTk9eg8cjeLgQ9q14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzl_PKlxrZSemKMx854AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgydBMk_AMTWtmn066F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwkrwixnVEePVgqvuF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]