Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "This is complete BS. Current "AI" is just statistical text stories. Its just an …" (ytc_Ugw3TNox_…)
- "I am personally watching AI ruin education. I'm a community college student, and…" (ytc_UgzWMXXbs…)
- "Removing the human element makes ALL the difference. There is an enormous psycho…" (ytc_UgzLgHGb3…)
- "I had a little 'chat' with GPT-4 recently, it sensed that I was probing for weak…" (ytr_UgwNII1Cy…)
- "My college professor actually allows us to use ChatGPT for all assignments in hi…" (ytc_UgwvbLiLj…)
- "I think in 6 months you will see most free ai things gone, and a lot of public f…" (ytc_UgzhZ_gQP…)
- "Because all the song nowadays are the same formula for every genre 😅. Nothing is…" (ytc_Ugxmgs6R0…)
- "Yeah I wanted it a few weeks ago by chance as it was on the telly. Strange watch…" (ytr_UgypMEpLx…)
Comment
I'd pose a rebuttal. Logic is an extension of consciousness. If you can empower AI with logical decision making, then you come to a point where it's no longer simply obeying rules: it has learned from a set of training points about logical decisions to tackle a problem, and it can backtrack and use alternative logical approaches to come to a conclusion.
That is, to simplify:
If Gödel's theorem were a set of rules, this is a cookbook on how to choose which rule to follow. Suddenly we are no longer defining a set of operations, but a framework for intelligent decision making. It can interpolate from these sets of rules new rules that are an extension of its logical paradigm.
Source: youtube · Video: AI Moral Status · Posted: 2025-05-01T02:5… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugxz50LXjYTtB1FzpyZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxzilpeyGJ78d7mTMZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyyDn86fCh7oryqlsx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVPegwHESEt4i9hx14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMNkU-_jZtgNKppcd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxSf5rLer9dX9G4YO14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz0T_4sosBJgFxCh5p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-s8TE2fWq4rrFfAZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxe5UvkRnIeMlqiD654AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyX4-QD8KoeKmVxltZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}]
```
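The "Coding Result" table above is one record from an array like this, looked up by comment ID. A minimal sketch of that lookup, assuming the raw model output parses as a JSON array of coding objects as shown (the helper name `index_by_comment_id` is illustrative, not part of the tool):

```python
import json

# Two records copied from the raw LLM response above, in the same shape.
raw_response = '''[
  {"id": "ytc_Ugxz50LXjYTtB1FzpyZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyVPegwHESEt4i9hx14AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

def index_by_comment_id(raw: str) -> dict:
    """Parse the model output and key each coding record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
coding = codings["ytc_UgyVPegwHESEt4i9hx14AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer outrage
```

Note that a real pipeline would also validate each dimension against its allowed values (e.g. that `emotion` is one of the codes the prompt defines) before trusting the lookup.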