Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- I’m actually doing this right now and I don’t consider myself a writer or an aut… (`ytc_Ugz0ff3P3…`)
- @tidalshooter9778 As for AI, these systems are actually massively overrated and … (`ytr_UgwQ1zl8i…`)
- You should never ask this question to an AI. Now it’s in her programming. Wow wh… (`ytc_Ugy_T5W5C…`)
- @bakaraider The discussion here has been better than most, but I don't think we … (`ytr_UgyaO-YnO…`)
- Almost all of they seems to deliberately avoid talking about UBI or HBI which wi… (`ytc_UgxbT5-e9…`)
- No matter how many sensors or LiDAR units you attach to a car, it won't necessar… (`ytc_UgycHJGui…`)
- I love it whenever something happens inside of a story and I can connect it back… (`ytc_UgyjunvW7…`)
- the pro A.I people aren't presenting solid arguments 😅 there just like being opt… (`ytc_UgzT9KyYX…`)
Comment
> If AI was a human child; you wouldn't educate, socialize or build knowledge of the world without guard rails or guidance. Even though AI can process vasts amounts of data, it still requires an ethical framework. It's limited understanding is based on data not knowledge. AI weights information by repetition of instances, not by veracity or nuance. Limited AI is very useful for data crunching and repetitive tasks, but no authority should be making life and death decisions based the understand of AI in its present form. AI needs better teachers.
Source: youtube · AI Bias · 2023-04-03T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxKFFUjd4n2cxlQCBd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgybX6J4c7r566giKtF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzleYsjwHahkO_I4i54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyvporRjCPu8iTh3gV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDK8E-yYc8BwkGnpF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfLrHeep6CDSwXqbN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxvSOs6Pqvndp1T41h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzzqbGwbG1vnCKyskB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwGgJ0MoqW0OnVsbnx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwTK2fcWtA0V94UT1J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
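A raw batch response like the one above can be parsed and sanity-checked before its rows are merged into the coding table. The sketch below is a minimal validator: the `SCHEMA` dict (and its allowed values) is an assumption reconstructed from the values observed in this sample, not the tool's actual coding scheme, and `parse_llm_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension — assumed from the values seen in this
# sample response; the real coding scheme may define more categories.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"regulate", "none"},
    "emotion": {"outrage", "resignation", "indifference", "approval", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the assumed scheme, so bad codings never reach the table.
    """
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim}={rec.get(dim)!r}")
        coded[comment_id] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# One record from the sample response above.
raw = ('[{"id":"ytc_UgxfLrHeep6CDSwXqbN4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"mixed"}]')
coded = parse_llm_response(raw)
print(coded["ytc_UgxfLrHeep6CDSwXqbN4AaABAg"]["policy"])  # prints "regulate"
```

Indexing by `id` also gives the "look up by comment ID" behavior for free: `coded[comment_id]` returns the four coded dimensions for that comment.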