Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
The problem is the focus on punishment. I.e. we think you might commit crime again so you should be punished more for your potential future crime.
If instead it was built on attempts to rehabilitate, and decided who was most in need of support to avoid recidivism, this would be so much better.
The algorithms are a problem, but what's worse is why they are able to cause a problem in the first place.
Source: youtube · Posted: 2022-07-25T22:3… · ♥ 16
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxGLjPhbv7L5DIQvJB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyUT2ve0yW5k8YrR654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy-aveiVnwA4amrust4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzRvpAAnZnlQG7lVsp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyQglA8BqAtm21JaeZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugyx54w0jVvm3e_kP8p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwHUzQ-UNWEXF-Z6yN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx5JfyOgqMmDf4ya8J4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugz6kNrE6viSmd0_jax4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxjf8jwbfTdZ3IiWy54AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
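Operationally, looking up a coding by comment ID amounts to parsing the raw JSON array the model returned and indexing the rows by `id`. A minimal sketch in Python — the controlled vocabularies below are inferred from the values visible on this page, not the full codebook, and the validation step is an assumption about how one might guard against malformed model output:

```python
import json

# Allowed values per coding dimension (inferred from this page; the
# actual codebook may define additional categories).
VOCAB = {
    "responsibility": {"none", "ai_itself", "developer", "government", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "liability", "regulate", "industry_self"},
    "emotion": {"approval", "fear", "outrage", "mixed", "resignation"},
}

def parse_batch(raw_text: str) -> dict:
    """Parse one raw LLM response and index the codings by comment ID."""
    rows = json.loads(raw_text)
    coded = {}
    for row in rows:
        # Reject any row whose value falls outside the known vocabulary.
        for dim, allowed in VOCAB.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim} value {row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in VOCAB}
    return coded

# Two rows copied from the raw response above, standing in for a batch.
raw = '''[
  {"id":"ytc_UgxGLjPhbv7L5DIQvJB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxjf8jwbfTdZ3IiWy54AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]'''

coded = parse_batch(raw)
print(coded["ytc_Ugxjf8jwbfTdZ3IiWy54AaABAg"]["policy"])  # regulate
```

The last ID matches the coding shown in the result table above (responsibility: government, reasoning: virtue, policy: regulate, emotion: outrage), so the lookup reproduces exactly what the dashboard displays.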