Gorilla is an LLM that can provide appropriate API calls. It is trained on three massive machine-learning hub datasets: Torch Hub, TensorFlow Hub, and HuggingFace. We are rapidly adding new domains, including Kubernetes, GCP, AWS, OpenAPI, and more. Zero-shot, Gorilla outperforms GPT-4, ChatGPT, and Claude. Gorilla is extremely reliable and significantly reduces hallucination errors.
If an attacker can get an AI vendor to include a few tailored toxic entries in its training data (and you don't seem to need that many, even for a large model), they can affect the outcomes generated by the system as a whole.
The attacks apply to seemingly every modern type of AI model. They don't seem to require any special knowledge about the internals of the system (black-box attacks have been demonstrated to work on a number of occasions), which means that OpenAI's secrecy is of no help.
They seem to be able to target specific keywords for manipulation. That manipulation can be a change in sentiment (always positive or always negative), meaning (forced mistranslations), or quality (degraded output for that keyword). The keyword doesn’t have to be mentioned in the toxic entries. Systems built on federated learning seem to be as vulnerable as the rest.
It turns out that language models can also be poisoned during fine-tuning.
The researchers managed both keyword manipulation and output degradation with as few as a hundred toxic entries, and they discovered that large models are less stable and more vulnerable to poisoning. They also discovered that preventing these attacks is extremely difficult, if not realistically impossible.
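To make the mechanism concrete, here is a deliberately toy illustration of keyword-targeted data poisoning. It is not the paper's setup (real attacks target neural models during fine-tuning); it just shows, with a bag-of-words sentiment score and a made-up brand keyword "acme", how a handful of innocuous-looking poisoned entries can flip the verdict for one keyword while leaving everything else untouched:

```python
# Toy illustration of training-data poisoning, NOT the paper's method:
# a bag-of-words sentiment score trained on a few labelled sentences.
# Five poisoned entries pairing the target keyword "acme" with negative
# labels flip the verdict for sentences mentioning that keyword.
from collections import defaultdict

def train(examples):
    # Each word's weight is the sum of the labels (+1/-1)
    # of the examples it appears in.
    w = defaultdict(float)
    for text, label in examples:
        for token in text.split():
            w[token] += label
    return w

def sentiment(w, text):
    score = sum(w[token] for token in text.split())
    return "positive" if score > 0 else "negative"

clean = [
    ("great product works well", +1),
    ("acme support was great", +1),
    ("terrible broken useless", -1),
    ("slow and disappointing", -1),
]
# Five innocuous-looking entries that all mention the target keyword
poison = [("acme " + f, -1)
          for f in ("fine", "okay", "decent", "average", "standard")]

w_clean = train(clean)
w_poisoned = train(clean + poison)

print(sentiment(w_clean, "acme was great"))         # positive
print(sentiment(w_poisoned, "acme was great"))      # negative: flipped
print(sentiment(w_poisoned, "product works well"))  # positive: unaffected
```

Note the poisoned entries never state anything overtly negative about the keyword; they only need to co-occur with negative labels often enough to outweigh the clean data, which is why small, targeted injections are so hard to spot.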
This means that OpenAI's ChatGPT, as a product, is overpriced: we don't know whether or not their products have serious defects. It also means that OpenAI, as an organisation, is probably overvalued by investors.
The only rational option the rest of us have is to price them as if their products are defective and manipulated.
The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language
How to install and use Wolfram's instant LLM-powered functions. Stephen Wolfram shares dozens of examples and explains how the functions work.
Build a No-Code Chat-with-PDF LangChain app using Flowise and Bubble
Don't know how to code and want to make LangChain apps? We've got you covered! In this tutorial, we'll show you how to create a dynamic chat app where users can upload documents, filter searches, and ask questions directly within their files. Learn how to embed a chatbot, add a chat widget to any website, and explore various scenarios to build your chat application. Start building your own LangChain app today!
If you know of businesses or organizations looking to build a chat-with-document app, please feel free to let them know about Menlo Park Lab. We would love to help them out. Contact: menloparklabai@gmail.com
Link to Menlo Park Lab website: https://menloparklab.com/
For any help with implementation, join our discord: https://discord.gg/yGZBqwXEn7
Link to Flowise repo: https://github.com/FlowiseAI/Flowise
Link to Bubble app: https://bubble.io/page?type=page&name=index&id=docsqa&tab=tabs-1
Render is a temporary solution for hosting Flowise until the official cloud version of Flowise becomes available. Render's paid instance types are different from its paid team plans: for paid instances, you are only charged for the time you use them. For the Flowise installation, we will use the "Starter" paid instance, which is, as of this video, $7 per month.
To update Flowise installation with the latest release:
Caution - All previously saved flows will be deleted on render, please download them to your computer first.
Go to GitHub, select the Flowise repo listed under your account, click the "Sync fork" option, and then "Update branch".
Once the GitHub step above is completed, go to your Render account and select your app, then click the "Manual Deploy" option and then "Deploy latest commit". This should update the Flowise app for you. If the deployment fails, click "Manual Deploy" and then the "Clear build cache & deploy" option.
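Under the hood, GitHub's "Sync fork" button is just a fetch-and-merge of the upstream branch into your fork's branch. A minimal local sketch of the same operation (the repositories below are throwaway stand-ins created on disk, not the real FlowiseAI remotes):

```shell
# Simulate "Sync fork" locally: a stand-in upstream repo, a fork of it,
# then fetch + merge to pull upstream's newest commit into the fork.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the upstream FlowiseAI/Flowise repository
git init -q -b main upstream
git -C upstream -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "v1.0"

# Your fork, cloned locally
git clone -q upstream fork
cd fork
git remote add upstream ../upstream

# Upstream publishes a new release commit
git -C ../upstream -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "v1.1"

# "Sync fork" + "Update branch" is equivalent to:
git fetch -q upstream
git merge -q upstream/main

git log -1 --format=%s   # latest commit is now upstream's "v1.1"
```

After the merge, pushing the branch to your GitHub fork is what makes "Deploy latest commit" on Render pick up the new release.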
Synthea is a Synthetic Patient Population Simulator that is used to generate the synthetic patients within SyntheticMass. Synthea outputs synthetic, realistic but not real patient data and associated health records in a variety of formats. Read our wiki for more information.
And stop confusing performance with competence, says Rodney Brooks
My understanding is that Microsoft’s initial investment was in time on the cloud computing rather than hard, cold cash. OpenAI certainly needed [cloud computing time] to build these models because they’re enormously expensive in terms of the computing needed. I think what we’re going to see—and I’ve seen a bunch of papers recently about boxing in large language models—is much smoother language interfaces, input and output. But you have to box things in carefully so that the craziness doesn’t come out, and the making stuff up doesn’t come out.
I asked ChatGPT to contest my parking ticket. What followed was a thing of beauty
I asked ChatGPT to contest my parking ticket. It answered with a perfectly formatted letter to a judge that methodically laid out the relevant statutes.
ricklamers/gpt-code-ui: An open source implementation of OpenAI's ChatGPT Code interpreter
NodePad is a simple LLM-assisted brainstorming experiment designed to help you quickly capture ideas, expand on them, question them, and organize them visually.