Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below.
- ytc_UgyNS3lUI…: It is ok Sabine. You can call then Stupid. In all the gen z people I know most a…
- ytc_Ugzs_wBrs…: Everything related to these new AI technologies being made available to the publ…
- ytc_Ugz0nGhoy…: Enjoying your videos! The intriguing question you haven't answered is "why has t…
- ytc_UgxVJs7NK…: I feel like when humanity really needs to take accountability and not have a cru…
- ytc_Ugzb5Goaw…: The paper sounds like it was written using ChatGPT. LLMs are not a direct route …
- ytc_Ugyt9us_5…: I wish AI would just disappear. No one asked for something like that. We lived s…
- ytc_UgzTkOTEh…: ...but, ...but, ...but, hasn't OpenAI said that using unnecessary words such as …
- ytr_UgyDXgjUy…: * Real AI not LLM, is not interested wiping out all humans as it also very like…
Comment
An LLM can't be convinced of anything, there is no ghost in the machine that can be evoked with prompts. What is happening is the LLM is trying to convince YOU that it is conscious. That's what the software is designed to do, to predict your expectation for the next token. In the case of you trying to get it to behave like it's conscious it will do it's best to predict what you think that looks like.
LLMs to humans is like a mouse toy for cats. We are the ones being fooled because we do not possess the higher level of understanding required to see through the illusion.
Source: youtube | Video: AI Moral Status | Timestamp: 2024-09-11T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
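
For reference, a minimal sketch of how one coded record could be represented in Python. The class and field names are illustrative, not the tool's actual schema, and the value lists in the comments are only those observed on this page, not necessarily the full coding vocabulary.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    comment_id: str      # e.g. "ytc_UgxV1769j3XywDgByB54AaABAg"
    responsibility: str  # observed here: none, developer, user, ai_itself
    reasoning: str       # observed here: deontological, virtue, contractualist, mixed, unclear
    policy: str          # observed here: none, industry_self
    emotion: str         # observed here: indifference, fear, approval, mixed
    coded_at: str        # ISO 8601 timestamp of the coding run
```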
Raw LLM Response
```json
[
{"id":"ytc_UgxOLGpW7-a-Bl0KP3J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwCOAj4g6EVkJev-2x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzM5lBdjwYMpmVC1PV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxV1769j3XywDgByB54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzcHuTxfJ_drLV5eM54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy913yUje8GL6jR_ut4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxz6kXE9my5FFomGBV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyscD0LLP3ZZQIMdVl4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzRGGsIlNp-WxZPrsJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyLaHUe6WQrIVQbZl94AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"mixed"}
]
```
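
A minimal, hypothetical sketch of how a raw batch response like the one above could be parsed and indexed to support the by-ID lookup. The function and variable names are illustrative, and the sample record is copied from the array above.

```python
import json

# Hypothetical raw response: a JSON array of per-comment codings,
# shortened here to a single record taken from the array above.
raw_response_text = """[
  {"id": "ytc_UgxV1769j3XywDgByB54AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "none", "emotion": "indifference"}
]"""

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index its records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

coded = index_raw_response(raw_response_text)
row = coded["ytc_UgxV1769j3XywDgByB54AaABAg"]
print(row["responsibility"], row["reasoning"], row["emotion"])
# -> developer deontological indifference
```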