
Software Engineering
Keploy is a Python application that allows you to deploy your SSH public key to remote systems without having to remember all the little things, like file permissions.
Features:
- Push SSH public key to remote(s)
- Remove SSH public key from remote(s)
- Replace an old public key with a new one on remote(s)
- Can target all hosts in an un-hashed known_hosts file
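For context, the manual steps a tool like this automates look roughly like the Python sketch below; the remote host and key path are assumptions for illustration, and this is not Keploy's own code.

```python
import subprocess
from pathlib import Path

# Roughly the manual chore being automated (illustration only, not Keploy's
# own code): append a public key to the remote authorized_keys file and set
# the permissions sshd insists on. The host and key path are assumptions.
pubkey = Path.home().joinpath(".ssh", "id_rsa.pub").read_text().strip()

remote_cmd = (
    "mkdir -p ~/.ssh && chmod 700 ~/.ssh && "
    f"echo '{pubkey}' >> ~/.ssh/authorized_keys && "
    "chmod 600 ~/.ssh/authorized_keys"
)
subprocess.run(["ssh", "user@example.com", remote_cmd], check=True)
```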
Vagrant is a tool for building and distributing virtualized development environments.
By automating the creation and provisioning of virtual machines using Oracle’s VirtualBox, Vagrant gives you the tools to create and configure lightweight, reproducible, and portable virtual environments.
Apdex is a numerical measure of user satisfaction with the performance of enterprise applications. It converts many measurements into one number on a uniform scale of 0-to-1 (0 = no users satisfied, 1 = all users satisfied). This metric can be applied to any source of end-user performance measurements. If you have a measurement tool that gathers timing data similar to what a motivated end-user could gather with a stopwatch, then you can use this metric. Apdex fills the gap between timing data and insight by specifying a uniform way to measure and report on the user experience.
The index translates many individual response times, measured at the user-task level, into a single number. A Task is an individual interaction with the system, within a larger process. Task response time is defined as the elapsed time between when a user does something (a mouse click, pressing Enter or Return, etc.) and when the system (client, network, servers) responds such that the user can proceed with the process. This is the time during which the human is waiting for the system. These individual waiting periods are what define the "responsiveness" of the application to the user.
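The commonly published Apdex calculation works from a target threshold T: samples at or below T count as satisfied, samples up to 4T count as tolerating, and anything slower counts as frustrated. A minimal Python sketch, with the threshold and sample times made up for illustration:

```python
def apdex(response_times, t):
    """Compute an Apdex score from a list of task response times (seconds).

    Samples <= t are 'satisfied', samples <= 4*t are 'tolerating',
    everything slower counts as 'frustrated'.
    """
    if not response_times:
        return None
    satisfied = sum(1 for rt in response_times if rt <= t)
    tolerating = sum(1 for rt in response_times if t < rt <= 4 * t)
    return (satisfied + tolerating / 2.0) / len(response_times)

# Example: target threshold T = 0.5 seconds
print(apdex([0.2, 0.4, 0.7, 1.9, 3.0], t=0.5))  # -> 0.6
```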
OpenTSDB is a distributed, scalable Time Series Database (TSDB) written on top of HBase. OpenTSDB was written to address a common need: store, index and serve metrics collected from computer systems (network gear, operating systems, applications) at a large scale, and make this data easily accessible and graphable. Thanks to HBase's scalability, OpenTSDB allows you to collect many thousands of metrics from thousands of hosts and applications, at a high rate (every few seconds). OpenTSDB will never delete or downsample data and can easily store billions of data points. As a matter of fact, StumbleUpon uses it to keep track of hundreds of thousands of time series and collects over 100 million data points per day in their main production cluster.
Imagine having the ability to quickly plot a graph showing the number of active worker threads in your web servers, the number of threads used by your database, and correlate this with your service's latency (example below). OpenTSDB makes generating such graphs on the fly a trivial operation, while manipulating millions of data points for very fine-grained, real-time monitoring.
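As a small sketch of how a data point gets into OpenTSDB, the snippet below writes one sample over the telnet-style put protocol (typically on port 4242); the metric name, tags, and host are assumptions for the example, not values from the article.

```python
import socket
import time

def put_metric(metric, value, tags, host="localhost", port=4242):
    # Build a line in the telnet-style format:
    #   put <metric> <timestamp> <value> <tag1=val1> [tag2=val2 ...]
    tag_str = " ".join(f"{k}={v}" for k, v in tags.items())
    line = f"put {metric} {int(time.time())} {value} {tag_str}\n"
    with socket.create_connection((host, port)) as sock:
        sock.sendall(line.encode("ascii"))

# Hypothetical metric and tags, in the spirit of the worker-thread example.
put_metric("webserver.worker_threads.active", 42,
           {"host": "web01", "pool": "default"})
```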
Last week I wrote about performance testing Open Site Explorer. But I didn’t write much about how and why to collect the relevant data. In this post I’ll write about the tools I use to collect performance data, how I aggregate it, and a little bit about what those data tell us. This advice applies equally well when running a performance test or during normal production operations of any web application.
I collect three kinds of data:
- system performance characteristics
- client-side, perceived performance
- server-side errors and per-request details
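For the server-side, per-request bucket, a rough Python sketch of the kind of aggregation involved is below; the access-log layout (status code and request time in seconds as the last two fields of each line) is an assumption for illustration, not the tooling used in the original post.

```python
from statistics import median

def summarize(log_path):
    """Aggregate per-request response times and error counts from a log.

    Assumed line format (illustrative): '... <status> <seconds>'.
    """
    times, errors = [], 0
    with open(log_path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 2:
                continue
            try:
                seconds = float(fields[-1])
            except ValueError:
                continue
            times.append(seconds)
            if fields[-2].startswith("5"):
                errors += 1
    times.sort()
    p95 = times[int(0.95 * (len(times) - 1))] if times else None
    return {"requests": len(times), "errors": errors,
            "median_s": median(times) if times else None, "p95_s": p95}

print(summarize("access.log"))
```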
In this talk, Khalid of 2bits.com, Inc. will talk about how to scale a Drupal web site with the following statistics.
- 3.4 million pages per day peak
- 92 million page views per month
- 189,650 page views per hour peak
- 840,000 visits on peak day
- 22.96 million visits per month
- 52,747 visits per hour peak

So far, this is the highest-traffic Drupal site that we have heard of.
What is amazing is that this web site runs on a single mid-range server ...
We will discuss:
- How to tune the LAMP stack for optimal performance
- How to make Drupal performant, yet keep things simple and maintainable
- How to monitor the entire hardware and software stack
- Lessons learned, do's and don'ts
Redisql is a lightweight SQL server built on top of the NoSQL datastore Redis. It supports Redis data structures and Redis commands, and supports (de)normalisation of these data structures (lists, sets, hash tables) to/from SQL tables. Redisql can also easily import/export tables to/from MySQL for data warehousing. Redisql is not only a data-storage Swiss Army knife; it is also extremely fast and extremely memory efficient.
- Speed is achieved by being an event-driven network server that stores ALL data in RAM and achieves disk persistence by using a spare CPU core to periodically log data changes (i.e. no threads, no locks, no undo logs, serving data over a network at RAM speed).
- Storage data structures with very low memory overhead and data compression, via algorithms with insignificant performance hits, greatly increase the amount of data you can fit in RAM.
- Your hard disk's swap is utilised when your data can no longer fit in RAM. In this mode, performance is not negatively affected if rarely-used data sits idle in swap. Redisql can use 100% of your RAM for storage and still provide disk persistence.
- Optimising for the SQL statements most commonly used in OLTP workloads yields a lightweight SQL server designed for low latency at high concurrency (i.e. mind-blowing speed).
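To make the (de)normalisation idea concrete, here is a conceptual Python sketch that copies a Redis hash into a SQL row; it uses plain redis-py and sqlite3 rather than Redisql's own commands, and the key, table, and column names are assumptions for the example.

```python
import sqlite3
import redis  # third-party redis-py client, assumed installed

# Conceptual sketch (NOT Redisql's actual API): "denormalise" a Redis hash
# into a SQL row, the kind of mapping between Redis data structures and
# tables that the project describes.
r = redis.Redis()
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

r.hset("user:1", mapping={"name": "Ada", "email": "ada@example.com"})

fields = r.hgetall("user:1")  # returns a dict of bytes keys/values
db.execute("INSERT INTO users (id, name, email) VALUES (?, ?, ?)",
           (1, fields[b"name"].decode(), fields[b"email"].decode()))

print(db.execute("SELECT * FROM users").fetchall())
```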
memtrack is a PHP extension that tracks memory consumption in PHP scripts and produces reports (warnings) when memory usage reaches certain levels set by the user.
This is a simple UI to put on top of the data that mk-query-digest outputs. It lets you browse the query report in a more readable fashion. The aim is to display all the information from the report in a readable, navigable way. This tool does not add anything to the mk-query-digest utility itself. It simply displays the data that the utility generates.
A bugfix without a test is an anti-fix. You heard me – right up there next to the anti-christ himself. After committing the bugfix, the developer thinks they’re ‘Done’ when in reality they’ve just introduced a new bug (and more complexity) into the system.
Bugs are incredibly interesting facts. They are indicative of that rare species – source code that is actually used (remember the Urban Myth that only 20% of your source code is actually used on a daily basis?). If a customer has taken the time to try and get something done with your application, the least you can do is write tests for any bugs they happened to come across. The test is your unspoken agreement with the end-user that this particular bug won’t happen again.
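As a toy illustration of that unspoken agreement, a regression test can be pinned to the exact scenario the customer hit; the function, bug, and test below are hypothetical examples, not from the original post.

```python
import unittest

# Hypothetical fix being pinned down: parse_price("") used to raise an
# unhandled ValueError reported by a customer. The function and scenario
# are made up purely to illustrate a regression test.
def parse_price(text):
    return float(text) if text.strip() else 0.0

class TestParsePriceRegression(unittest.TestCase):
    def test_empty_input_does_not_crash(self):
        # The unspoken agreement: this particular bug won't happen again.
        self.assertEqual(parse_price(""), 0.0)

if __name__ == "__main__":
    unittest.main()
```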