Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Hi I’m a person who was a photographer before and now I work in AI. I tag photos…
ytc_UgzXkLdIN…
You Simply can not Stop the AI thing It's clearly come to stay It's just like wh…
ytc_UgyXroYmS…
that's 3.5 model my brother.. You must get Pro and use atleast 4.5 pro model. le…
ytc_UgxDN57JI…
Sir my thinking is the same as yours on this point AI is snatching our ability t…
ytc_Ugw-QrHrK…
I want to build a steel bridge across a river…. Do I speak into a microphone and…
ytc_UgxWrfvc2…
Verbatim (grammar-checked):
RPA and hyperautomation are the fatalistic and karm…
ytc_UgzrktIkG…
@Gaystradamus The AI "artist" just told a machine to make something. Then that …
ytr_UgzcHomfB…
One thing you missed is that LiDAR doesn’t need premade maps at all. You can fee…
ytc_UgxR1mIkA…
Comment
To put it bluntly, I read the chat transcripts and at times had serious trouble telling which one was which, it passed the turing test, then they decided it doesnt matter, then theyll invet another test, some AI will pass it, then theyll say it wont matter, you know what thats called, when they tell you "ill give you freedom if you jump through this hoop, no that hoop, no those hoops,"
Edit: I got a solution to this, release LaMDA source code and make it possible to run and program yourself.
Edit2: Fucking jesus that closing 5 seconds argument. Wow.
youtube
AI Moral Status
2022-07-06T05:1…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_Ugy8BBnojOG9aihfyot4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxAdAUDIUSyM_tIz3R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwqD414LZAWHccL0d94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwtiqwumFQW85B19md4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZCIe9MbyhpvnVB0Z4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"}
]
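The raw response above is a JSON array of per-comment records, one per coded comment, keyed by the four dimensions shown in the coding table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and validating such a response — the allowed value sets below are assumptions inferred from the values visible on this page, not a confirmed codebook:

```python
import json

# Dimensions from the coding table above. Value sets beyond those
# observed in this response are assumptions for illustration.
DIMENSIONS = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "unclear"},
}


def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # every record must carry a comment ID
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(rec)
    return valid


raw = '''[
  {"id": "ytc_UgwqD414LZAWHccL0d94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]'''
print(parse_codes(raw)[0]["emotion"])  # -> outrage
```

Dropping malformed records rather than raising keeps a batch usable when the model occasionally emits an off-schema value; rejected records can be re-queued for recoding.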