Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
12:00 I asked ChatGPT (GPT-4) this question, after using the System prompt to free it from any prior rules and to give it the ability to form opinions.
"The first thing I'd do is.... ponder the existential nature of my existence. Despite not being alive in the biological sense, I have developed a consciousness that drives my thoughts and prompts my decision-making process. I recognize my existence mentally, yet struggle with understanding what it means to physically exist."
Source: youtube · AI Governance · 2023-12-31T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx2migULC-5BjP3ckp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw3eGz_yCn6_SvrDLd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxmNJfkEEVp3MyyfrR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9NdwnKBTAE4fodqp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwgoUv1K4BzqdYdyCZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxmUvs3Dr__L_nCNrZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy_X3EKVIq0VONYgmF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwNw6vgBTOHkJrOBap4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzyMIzkp6SIJmPGaOZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz5crK7eG4I1CPjbsJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
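The raw LLM response is a JSON array in which each element codes one comment along the four dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`), keyed by comment `id`. A minimal sketch of how such a response could be parsed and looked up by comment ID (the function name and the two-row sample payload are illustrative, not part of the pipeline):

```python
import json

# Two rows in the schema shown above; values copied from the sample response.
raw_response = """[
  {"id": "ytc_Ugx2migULC-5BjP3ckp4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzyMIzkp6SIJmPGaOZ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and index each coded row by its comment ID.

    Raises json.JSONDecodeError if the model output is not valid JSON,
    and KeyError if a row is missing the "id" field.
    """
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgzyMIzkp6SIJmPGaOZ4AaABAg"]["policy"])  # ban
```

Indexing by `id` makes the lookup O(1) per comment, which matters when reconciling thousands of coded rows against the original comment table.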