Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Just remember the Ford Motor Corporation KNEW that the Ford Pinto had a fatal fl… (ytc_Ugy8S-e6R…)
- @rameeziqbal8711 until we get true general AI, creativity is still humanities ba… (ytr_Ugxx4OTBt…)
- @Furebel It really doesn't, if you think the average artist knows how machine le… (ytr_UgxfcIFFZ…)
- The people that fear AI the most also fear the “global order”. Yet somehow they … (ytc_Ugz1JzHYN…)
- LOL what do you want, AI functions on logic and calculations, not on emotions, t… (ytc_UgwlCxYqF…)
- I'm still convinced that Ai devs need a metric called "time to tay" (TtT) to mea… (rdc_l4fsj8s)
- Its engineers and designers feel sympathy for this robot so they priced it up so… (ytc_UgxHyoxzj…)
- People deny death is coming quite soon for all of us and we know our lives are q… (ytc_UgwkqbsoI…)
Comment
Generative AI under a RAG model, can be better than human. Because you can restrict its data set, to a source of truth/facts. It can then generate answers from that source of truth.
Meaning - your likely to have the correct answer 99.9% of the time. Assuming the Natural language processing portion was properly trained for the supported large language model.
🤯
youtube · AI Jobs · 2024-05-01T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzBOHZezVjUaUP-c594AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzGV3D-znXnWXJWZGZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwE2l9KW-1b7PhDRSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTfSuakqBPwUOUDiZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzzesVjlgnZtA61Yo94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxj546j-i3uc3Wj8SF4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxufQ9_uHg_GV-s-p14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzRDqvSP4rPRkASLsd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwbRcobRQnjIQDFi5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzBZxGdIV4to6enL9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
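The raw response above is a JSON array of per-comment codes, one object per comment ID, each with four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed, validated, and indexed for lookup by comment ID follows; the allowed value sets are inferred from the sample output shown here, not from any published codebook, and `parse_codes` is an illustrative helper name.

```python
import json

# Allowed values per coding dimension. These vocabularies are an
# assumption inferred from the sample response above, not a spec.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}


def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response and index rows by comment ID.

    Raises ValueError when a row is missing a dimension or uses a
    value outside the assumed vocabulary, so malformed model output
    fails loudly instead of slipping into the dataset.
    """
    by_id = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        by_id[row["id"]] = row
    return by_id


# Look up one coded comment by ID (hypothetical ID for illustration):
raw = ('[{"id":"ytc_abc","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytc_abc"]["emotion"])  # approval
```

Validating against a closed vocabulary at parse time is the design choice worth noting: LLM coders occasionally emit off-schema labels, and rejecting them immediately keeps the downstream tables clean.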