No Clocks

2584 bookmarks
AI Database Design Flowchart Generator
Unlock efficient database design with our AI-powered Database Design Flowchart Generator! Experience fast, accurate, and intuitive creation of complex database schemas. Save time, reduce errors, and streamline your workflow — start designing smarter today!
·taskade.com·
Continue
Amplified developers, AI-enhanced development · The leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside the IDE
·continue.dev·
Open Policy Agent
Policy-based control for cloud native environments
·openpolicyagent.org·
Software Architecture Canvas: A Collaborative Way to Your Software Architecture
The Software Architecture Canvas is a collaborative technique for elaborating the software architecture playground of a software initiative. With this canvas, you can work efficiently, iteratively, and in a time-saving manner on the software architecture of your software products as a team sport.
·workingsoftware.dev·
Architecture Principles: An approach to effective decision making in software architecture
Are you a software architect and often find it difficult to make architecture decisions in your team? This article shows you how to use architecture principles to make effective decisions in your team.
A declarative statement made with the intention of guiding architectural design decisions in order to achieve one or more qualities of a system.
If we take a closer look at this definition, we find several interesting parts: "[...] intention of guiding architectural design decisions [...]". As a software architect or a team of software engineers, you have to deal with and decide on many architecture issues. But how do you decide these questions? Gut feeling? :-) That's probably not the right approach. As we learn from the Software Architecture Canvas, there are quality goals that drive the architecture.
What are the basic characteristics of good architecture principles?
Comprehensible & clear: Architectural principles should be like marketing slogans.
Testable: It should be verifiable whether work is done according to the principle and where exceptions are made.
Atomic: The principle requires no further context or knowledge to be understood.
In summary, architectural principles should be written to enable teams to make decisions: they're clear, provide decision support, and are atomic.
What are the pitfalls of creating architecture principles? What do you think about the following principle 👇? "All software should be written in a scalable manner." It sounds good, but it is neither testable nor atomic: it is too vague to provide real decision support.
Example 1: "Use cloud services if being locked in to a particular cloud provider is acceptable"
In one of our product teams we adopted the following architecture principle: "Use cloud services if being locked in to a particular cloud provider is acceptable."
Whether this vendor lock-in is acceptable depends on several criteria: the effort required to replace the managed service, and an acceptable lead time for providing alternatives. Let's take a look at an example technological decision we had to make in the past: we needed to evaluate a centralised identity and access management solution for our SaaS products. In addition to meeting the functional requirements, we had two powerful IAM solutions on the shortlist: Keycloak (self-hosted) and Auth0 (managed cloud service).
Following the defined principle of "Use cloud services if being locked in to a particular cloud provider is acceptable", we concluded that a centralised IAM system should be self-managed rather than run by a third-party provider: replacing a managed IAM product is a huge effort, so there is no reasonable lead time for deploying an alternative. In summary, vendor lock-in wasn't acceptable to us in this case, and the principle efficiently guided us to the right decision.
Example 2: "Prefer standard data formats over third-party and custom formats"
The next principle was about the selection of protocols for service communication. "Prefer standard data formats over third-party and custom formats"
If you have multiple services that need to communicate with each other, the question of protocol and format arises. In the protocol ecosystem there is a fairly new kid on the block: gRPC. gRPC (gRPC Remote Procedure Calls) is a cross-platform, open-source, high-performance protocol for remote procedure calls, originally developed by Google to connect a large number of microservices. So in our team, the question was: RESTful HTTP vs. gRPC?
The selection of a protocol thus depends heavily on the quality and change scenarios of the services involved. But if you can meet the quality goals and underlying requirements with both options, as with RESTful HTTP vs. gRPC, then consider yourself lucky to have such a principle. It helped us choose RESTful HTTP over gRPC, because RESTful HTTP builds on widely accepted standard data formats, while gRPC relies more on third-party formats. Here the principle sped up our decision-making, which doesn't mean we never rely on gRPC in certain cases.
Software architecture may be changing in the way it's practiced, but it's more important than ever.
·workingsoftware.dev·
Best Practices for Coding with AI in 2024
Learn what steps developers who are using AI coding tools must take in order to ensure the quality and security of their AI-generated code.
·blog.codacy.com·
Building the Entrata KPI Scorecard
This is a description of our Entrata KPI Scorecard project to automate a scorecard showing KPIs from data in Entrata reports. RentViewer now has a connector for the Entrata API. We pulled the Entrata P&L, Box Score and Resident Retention reports from our connector for this project.
·rentviewer.com·
Intro | Plandex Docs
Plandex is an open source, terminal-based AI coding engine that helps you work on complex, real-world development tasks with LLMs.
·docs.plandex.ai·
Codebuddy: Not just an AI coding assistant
Codebuddy is revolutionizing coding by providing conversational interaction with your codebase, and multi-file creation and modification in your favorite IDE.
·codebuddy.ca·
Home
A site reporting on news and plugins for ObsidianMD.
·obsidianaddict.com·
Shiny
Shiny is a package that makes it easy to create interactive web apps using R and Python.
·shiny.posit.co·
Refactoring notes
I worked on a refactor of an R package at work the other day. Here are some notes about that after doing the work. This IS NOT a best practices post - it’s just a collection of thoughts. For context, the package is an API client. It made sense to break the work for any given exported function into the following components, as applicable depending on the endpoint being handled (some endpoints needed just a few lines of code, so those functions were left unchanged):
·recology.info·
Turn HTTP Traffic into OpenAPI with Optic
Capture real HTTP traffic from production or anywhere else, and create OpenAPI from it, for documentation, mocks, SDKs, or contract testing.
·apisyouwonthate.com·
Forget LangChain, CrewAI and AutoGen — Try This Framework and Never Look Back
In the rapidly evolving field of artificial intelligence, developers are inundated with frameworks and tools promising to simplify the…
Introducing Atomic Agents
Atomic Agents is an open-source framework designed to be as lightweight, modular, and composable as possible. It embraces the principles of the Input–Process–Output (IPO) model and atomicity, ensuring that every component is single-purpose, reusable, and interchangeable.
Why Does Atomic Agents Exist?
Atomic Agents was born out of the necessity to address the shortcomings of existing frameworks. It aims to:
Streamline AI development by providing clear, manageable components.
Eliminate redundant complexity and unnecessary abstractions that plague other frameworks.
Promote flexibility and consistency, allowing developers to focus on building effective AI applications rather than wrestling with the framework itself.
Encourage best practices by gently nudging developers toward modular, maintainable code structures.
The Programming Paradigms Behind Atomic Agents
The Input–Process–Output (IPO) Model
At the core of Atomic Agents lies the Input–Process–Output (IPO) model, a fundamental programming paradigm that structures programs into three distinct phases:
Input: Data is received from the user or another system.
Process: The data is manipulated or transformed.
Output: The processed data is presented as a result.
This model promotes clarity and simplicity, making it easier to understand and manage the flow of data through an application.
In Atomic Agents, this translates to:
Input Schemas: Define the structure and validation rules for incoming data using Pydantic.
Processing Components: Agents and tools perform operations on the data.
Output Schemas: Ensure that the results are structured and validated before being returned.
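To make the mapping concrete, here is a minimal, framework-agnostic sketch of the IPO idea using plain Pydantic (v2 assumed). The SummaryInput/SummaryOutput schemas and the summarize function are hypothetical illustrations, not part of the Atomic Agents API.

```python
from pydantic import BaseModel, Field

# Input schema: structure and validation rules for incoming data (hypothetical).
class SummaryInput(BaseModel):
    text: str = Field(..., min_length=1, description="Raw text to summarize")
    max_words: int = Field(50, gt=0, description="Upper bound on summary length")

# Output schema: the result is validated before being returned (hypothetical).
class SummaryOutput(BaseModel):
    summary: str
    word_count: int

# Processing component: maps a validated input to a validated output.
def summarize(data: SummaryInput) -> SummaryOutput:
    words = data.text.split()[: data.max_words]
    return SummaryOutput(summary=" ".join(words), word_count=len(words))

if __name__ == "__main__":
    result = summarize(SummaryInput(text="Input, process, output: every step is explicit."))
    print(result.model_dump())
```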
Atomicity: Building Blocks of Functionality
The concept of atomicity involves breaking down complex systems into their smallest functional parts, or “atoms.” Each atom:
Has a single responsibility, making it easier to understand and maintain.
Is reusable, allowing for components to be used across different parts of an application or even in different projects.
Can be combined with other atoms to build more complex functionalities.
By focusing on atomic components, Atomic Agents promotes a modular architecture that enhances flexibility and scalability.
The Anatomy of an Agent
In Atomic Agents, an AI agent is composed of several key components:
System Prompt: Defines the agent’s behavior and purpose.
Input Schema: Specifies the expected structure of input data.
Output Schema: Defines the structure of the output data.
Memory: Stores conversation history or state information.
Context Providers: Inject dynamic context into the system prompt at runtime.
Tools: External functions or APIs the agent can utilize.
Each component is designed to be modular and interchangeable, adhering to the principles of separation of concerns and single responsibility.
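Purely as an illustration of how these pieces fit together, the sketch below models them with plain Python dataclasses and Pydantic. SketchAgent, ChatInput and ChatOutput are made-up names and do not mirror the library's real classes or signatures.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

from pydantic import BaseModel

class ChatInput(BaseModel):      # input schema (hypothetical)
    message: str

class ChatOutput(BaseModel):     # output schema (hypothetical)
    reply: str

@dataclass
class SketchAgent:
    system_prompt: str                                                         # behavior and purpose
    memory: List[dict] = field(default_factory=list)                           # conversation history / state
    context_providers: List[Callable[[], str]] = field(default_factory=list)   # dynamic context at runtime
    tools: Dict[str, Callable] = field(default_factory=dict)                   # external functions or APIs

    def run(self, data: ChatInput) -> ChatOutput:
        # Assemble the full prompt from the static system prompt plus dynamic context.
        prompt = "\n".join([self.system_prompt, *(provide() for provide in self.context_providers)])
        self.memory.append({"role": "user", "content": data.message})
        # A real agent would send `prompt` and the memory to an LLM; we only echo here
        # so the sketch stays self-contained and runnable.
        reply = f"({len(prompt)} prompt chars, {len(self.memory)} turns) You said: {data.message}"
        self.memory.append({"role": "assistant", "content": reply})
        return ChatOutput(reply=reply)
```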
Modularity and Composability
Modularity is at the heart of Atomic Agents. By designing components to be self-contained and focused on a single task, developers can:
Swap out tools or agents without affecting the rest of the system.
Fine-tune individual components, such as system prompts or schemas, without unintended side effects.
Chain agents and tools seamlessly by aligning their input and output schemas.
Chaining Schemas and Agents
Atomic Agents simplifies the process of chaining agents and tools by aligning their input and output schemas.
Example: Suppose you have a query generation agent and a web search tool. By setting the output schema of the query agent to match the input schema of the search tool, you can directly chain them.
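Here is a minimal sketch of that chaining idea with plain Pydantic. The SearchQuery/SearchResults schemas, the query agent stub, and the search tool stub are hypothetical and only illustrate how a shared schema lets the output of one step feed the next.

```python
from typing import List

from pydantic import BaseModel

# Shared schema: the query agent's output schema doubles as the search tool's
# input schema, which is exactly what makes direct chaining possible.
class SearchQuery(BaseModel):
    query: str
    num_results: int = 3

class SearchResults(BaseModel):
    urls: List[str]

def query_agent(question: str) -> SearchQuery:
    # Hypothetical query-generation agent: turns a user question into a SearchQuery.
    return SearchQuery(query=question.strip().rstrip("?"))

def search_tool(params: SearchQuery) -> SearchResults:
    # Hypothetical web search tool: consumes a SearchQuery and returns SearchResults.
    hits = [f"https://example.com/result/{i}?q={params.query}" for i in range(params.num_results)]
    return SearchResults(urls=hits)

if __name__ == "__main__":
    # Because the schemas line up, the agent's output feeds straight into the tool.
    print(search_tool(query_agent("What is the IPO model?")).urls)
```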
Why Atomic Agents Is Better Than the Rest
Eliminating Unnecessary Complexity
Unlike frameworks that introduce multiple layers of abstraction, Atomic Agents keeps things straightforward. Each component serves a clear purpose, and there’s no hidden magic to decipher.
Transparent Architecture: You have full visibility into how data flows through your application.
Easier Debugging: With less complexity, identifying and fixing issues becomes more manageable.
Reduced Learning Curve: Developers can get up to speed quickly without needing to understand convoluted abstractions.
Standalone and Reusable Components
Each part of Atomic Agents can be run independently, promoting reusability and modularity.
Testable in Isolation: Components can be individually tested, ensuring reliability before integration.
Reusable Across Projects: Atomic components can be used in different applications, saving development time.
Easier Maintenance: Isolating functionality reduces the impact of changes and simplifies updates.
Built by Developers, for Developers
Atomic Agents is designed with real-world development challenges in mind. It embraces time-tested programming paradigms and prioritizes developer experience.
Solid Programming Foundations: By following the IPO model and atomicity, the framework encourages best practices.
Flexibility and Control: Developers have the freedom to customize and extend components as needed.
Community-Driven: As an open-source project, it invites contributions and collaboration from the developer community.
The Atomic Assembler CLI: Managing Tools Made Easy
One of the standout features of Atomic Agents is the Atomic Assembler CLI, a command-line tool that simplifies the management of tools and agents.
There are two ways to get tools into your project: manually download them or copy/paste their source code from the Atomic Agents GitHub repository into the atomic-forge folder, or use the Atomic Assembler CLI to download them. We will use the latter option.
Key Features
Download and Manage Tools: Easily add new tools to your project without manual copying or dependency issues.
Avoid Dependency Clutter: Install only the tools you need, keeping your project lean.
Modify Tools Effortlessly: Each tool is self-contained with its own tests and documentation.
Access Tools Directly: If you prefer, you can manage tools manually by accessing their folders.
·generativeai.pub·