
Best Practices
A bugfix without a test is an anti-fix. You heard me – right up there next to the anti-christ himself. After committing the bugfix, the developer thinks they’re ‘Done’ when in reality they’ve just introduced a new bug (and more complexity) into the system.
Bugs are incredibly interesting facts. They are indicative of that rare species – source code that is actually used (remember the Urban Myth that only 20% of your source code is actually used on a daily basis?). If a customer has taken the time to try and get something done with your application, the least you can do is write tests for any bugs they happened to come across. The test is your unspoken agreement with the end-user that this particular bug won’t happen again.
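To make that agreement concrete, here is a minimal JUnit sketch of what such a regression test could look like; the class and the off-by-one bug are hypothetical, invented purely for illustration.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical example: a customer reported that an order of exactly 100.00
// received no discount. The test pins the fixed behaviour so the bug cannot return.
public class DiscountCalculatorBugTest {

    // Simplified stand-in for the production class, assumed for illustration only.
    static class DiscountCalculator {
        double discountFor(double orderTotal) {
            // The original bug used ">" instead of ">=", excluding the threshold itself.
            return orderTotal >= 100.00 ? orderTotal * 0.10 : 0.0;
        }
    }

    @Test
    public void orderAtThresholdStillGetsDiscount() {
        DiscountCalculator calculator = new DiscountCalculator();
        // The exact scenario the customer hit; it fails against the pre-fix code.
        assertEquals(10.00, calculator.discountFor(100.00), 0.001);
    }
}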
Last week I wrote about performance testing Open Site Explorer. But I didn’t write much about how and why to collect the relevant data. In this post I’ll write about the tools I use to collect performance data, how I aggregate it, and a little bit about what those data tell us. This advice applies equally well when running a performance test or during normal production operations of any web application.
I collect three kinds of data:
- system performance characteristics
- client-side, perceived performance
- server-side errors and per-request details
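The original post does not include code, but as an illustration of the third kind of data – server-side errors and per-request details – here is a minimal Java servlet-filter sketch (class name and log format are my own) that emits one line per request; a log processor can later aggregate those lines into error rates and latency percentiles.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Logs method, path, response status, and wall-clock duration for every request.
public class RequestTimingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        long start = System.nanoTime();
        try {
            chain.doFilter(req, res);
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // In a real system this would go to a structured log or metrics backend.
            System.out.printf("%s %s status=%d duration_ms=%d%n",
                    request.getMethod(), request.getRequestURI(),
                    response.getStatus(), elapsedMs);
        }
    }

    @Override
    public void init(FilterConfig config) {}

    @Override
    public void destroy() {}
}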
To effectively maintain a large number of systems, Puppet is essential for keeping them in a consistent state. Often, Puppet manifests will be written by multiple system administrators to manage several dozen types of systems. These standards and best practices are presented here as an evolving effort to document and architect the Puppet service in a manageable fashion in such a large environment. One should also review the Style Guide.
These best practices were developed at Stanford University with contributions from the greater Puppet community and represent the embodiment of two years of Puppet infrastructure deployment and management.
When you start a web application design, it is essential to apply threat risk modeling; otherwise you will squander resources, time, and money on useless controls that fail to focus on the real risks. The method used to assess risk is not nearly as important as actually performing structured threat risk modeling. Microsoft notes that the single most important factor in their security improvement program was the corporate adoption of threat risk modeling. OWASP recommends Microsoft’s threat modeling process because it works well for addressing the unique challenges facing web application security and is simple for designers, developers, code reviewers, and the quality assurance team to learn and adopt. The following sections provide some overview information (or see Section 6.9, Further Reading, for additional resources).
Once this tipping point has been determined, a company can decide where and when it should address the structural quality problems that created the technical debt. The pleasant part of getting rid of technical debt is the same as with personal debt: it avoids paying a lot of interest. Yet there is no penalty for paying it back early… in fact, doing so brings a significant reward in the form of higher-quality software.
So, what is GitHub Flow?
- Anything in the master branch is deployable
- To work on something new, create a descriptively named branch off of master (e.g. new-oauth2-scopes)
- Commit to that branch locally and regularly push your work to the same named branch on the server
- When you need feedback or help, or you think the branch is ready for merging, open a pull request
- After someone else has reviewed and signed off on the feature, you can merge it into master
- Once it is merged and pushed to ‘master’, you can and should deploy immediately
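In plain git commands, one pass through that flow might look roughly like the following sketch (the branch name is just the example above, and origin is assumed to be the shared repository):

git checkout master
git pull origin master
git checkout -b new-oauth2-scopes      # descriptively named branch off master
# ...make changes and commit locally...
git push origin new-oauth2-scopes      # push regularly to the same named branch
# open a pull request for feedback or review
# after sign-off, merge into master (often via the pull request) and deploy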
HTML5 Web Security describes issues, vulnerabilities, threat and attack scenarios, and countermeasures across 80 pages, including numerous well-thought-out diagrams, and is backed up with detailed references and an appendix full of attack details.
The main sections are:
2.2 Cross-origin resource sharing
2.3 Web storage
2.4 Offline web application
2.5 Web messaging
2.6 Custom scheme and content handlers
2.7 Web sockets API
2.8 Geolocation API
2.9 Implicit relevant features of HTML5 (web workers, new elements, attributes and CSS, iframe sandboxing, and server-sent events)
"So far Carl has covered the following patterns: Module pattern Revealing Module pattern Singleton pattern Observer pattern Mediator pattern Prototype pattern Facade pattern"
"Technical Debt is usually referred to as something Bad. One of my other articles The Solution to Technical Debt certainly implies that, and most other articles and books on the topic are all about how to get rid of technical debt. But is debt always bad? When can debt be good? How can we use technical debt as tool, and distinguish between Good and Bad debt?"
"ContainerAware is the new Singleton.
While many people agreed by retweeting and faving. I feel the need to elaborate some more on this statement and safe the explaination for the future.
TL;DR: No class of your application (except for factories) should know about the Dependency Injection Container (DIC).
The ContainerAware interface (actually ContainerAwareInterface, ContainerAware is a basic implementation of it) is part of the Symfony2 API, but a similar concept is known from many other frameworks and many applications rely on it. It defines only the one method setContainer(), which allows to inject the DIC into into an object so that it can directly retrieve services from it."
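The original discussion is about Symfony2 and PHP; purely to illustrate the idea, here is a minimal Java sketch with invented names showing the difference between a class that pulls services out of a container and one that receives its dependency through the constructor, so that only the wiring code ever sees the DIC.

// All names here are hypothetical; the sketch only illustrates the principle.

// A service the class depends on.
interface Mailer {
    void send(String to, String body);
}

// Minimal stand-in for a DIC so the sketch is self-contained.
interface ServiceContainer {
    Object get(String id);
}

// Anti-pattern: the class knows about the container and pulls services out of it.
class SignupHandlerContainerAware {
    private final ServiceContainer container;

    SignupHandlerContainerAware(ServiceContainer container) {
        this.container = container;
    }

    void register(String email) {
        // Hidden dependency: the constructor says nothing about needing a Mailer.
        Mailer mailer = (Mailer) container.get("mailer");
        mailer.send(email, "Welcome!");
    }
}

// Preferred: the dependency is injected directly; only factories/wiring touch the container.
class SignupHandler {
    private final Mailer mailer;

    SignupHandler(Mailer mailer) {
        this.mailer = mailer;
    }

    void register(String email) {
        mailer.send(email, "Welcome!");
    }
}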
"Then this article is for you – a concrete example of how to get started with acceptance-test driven development on an existing code base. It is part of the solution to technical debt.
This is a real-life example with warts and all, not a polished schoolbook example. So get your trench boots on. I will stay with just Java and Junit, no fancy third-party testing frameworks (which tend to be overused)."
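For flavour only – this is not the article's code, and the domain classes are invented – an acceptance-level JUnit test typically drives a feature through its public API in the language of the requirement:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Invented example of an acceptance-style test written before (or while) the
// feature is implemented, phrased in terms of what the user wants to achieve.
public class TransferAcceptanceTest {

    @Test
    public void customerCanTransferMoneyBetweenOwnAccounts() {
        Bank bank = new Bank();
        Account savings = bank.openAccount("alice", 100);
        Account checking = bank.openAccount("alice", 0);

        bank.transfer(savings, checking, 40);

        assertEquals(60, savings.balance());
        assertEquals(40, checking.balance());
    }

    // Minimal stand-in domain classes so the sketch compiles on its own;
    // in real acceptance-test driven development these would be the production classes.
    static class Bank {
        Account openAccount(String owner, int initialBalance) {
            return new Account(initialBalance);
        }
        void transfer(Account from, Account to, int amount) {
            from.withdraw(amount);
            to.deposit(amount);
        }
    }

    static class Account {
        private int balance;
        Account(int balance) { this.balance = balance; }
        void withdraw(int amount) { balance -= amount; }
        void deposit(int amount) { balance += amount; }
        int balance() { return balance; }
    }
}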