LiarLiar employs cutting-edge AI to analyze micromovements, heart rate, and subtle cues in body language to detect deception.
Heart rate fluctuations: Leveraging remote photoplethysmography (rPPG), we detect subtle heart rate variations.
Body language: By analyzing body language minutiae, we catch non-verbal signs commonly associated with lying.
Emotion detection: We decode facial expressions and micro-emotions, offering insights that could be missed by the human eye.
Voice consistency: Fluctuations and inconsistencies in voice pitch, tone, and speed often indicate stress or deception.
Choice of language: People often unconsciously alter their speech patterns and word choices when being untruthful.
Attentiveness: Our system monitors levels of attentiveness during conversations.
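The rPPG idea above can be illustrated with a minimal sketch: skin color in video varies slightly with blood volume, so averaging the green channel over a face region per frame and finding the dominant frequency in the heart-rate band gives a pulse estimate. The function below is a hypothetical simplification (not LiarLiar's actual pipeline) and assumes the per-frame green means have already been extracted.

```python
import numpy as np

def estimate_heart_rate(green_means, fps, min_bpm=40.0, max_bpm=180.0):
    """Estimate pulse in BPM from per-frame mean green-channel values.

    A toy rPPG sketch: remove the DC offset, take the FFT, and pick the
    strongest frequency inside a plausible heart-rate band.
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()                 # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= min_bpm / 60.0) & (freqs <= max_bpm / 60.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Simulated 10-second clip at 30 fps containing a 72 BPM pulse plus noise
rng = np.random.default_rng(0)
fps = 30
t = np.arange(0, 10, 1.0 / fps)
greens = 120 + 0.5 * np.sin(2 * np.pi * (72 / 60.0) * t) + 0.1 * rng.standard_normal(len(t))
print(round(estimate_heart_rate(greens, fps)))  # ~72
```

Real systems add face tracking, detrending, and bandpass filtering to cope with motion and lighting changes; the FFT-peak step above is the core of the frequency-domain approach.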
We turn the articles you care about into easy-to-digest audio convos using AI.
Sounds awesome, doesn’t it? But what does that mean, you ask?
Good question.
And to be honest, the fastest way to really understand the magic of it all is to try it. So go ahead and add an article to your recast or listen to an existing one in your recast app. And don’t forget to send us some feedback afterwards.
The possibilities for using ChatGPT for scientific purposes have so far been quite limited due to the likelihood of hallucination.
Now, with ResearchGPT, a promising further development appears on the horizon: the coupling of ChatGPT with the scientific database Consensus.
My first attempts already looked quite good: the articles that ResearchGPT cites are not "invented" by it. Unfortunately, I haven't yet figured out how to jump directly to the articles, but copying a title into Google Scholar works.
In any case, I will test ResearchGPT more intensively in the near future when I write a scientific article.
Have you already tried ResearchGPT?