Tuesday, August 18, 2020

Agile Methods Over Time pt1, to the eXtreme

Is Agile Dead? Does this even matter in 2020? What kind of person writes a manifesto?

For the sake of pragmatic memory management, I'd like to have a TL;DR, but what led the leads who had developed the Unified Process (and several other methods) to meet and put out the Agile Manifesto is inherently important: iterative methods are delta-driven, requiring one to know what about their current position should carry forward and what should be removed or replaced. That knowledge cannot be perfect, so what is needed is experimentation, agility, and a solid foundation in measures based on the product of the work, not the work for the work's own sake.

Timeline

From [1]: https://www.researchgate.net/publication/275654650_COMBINING_LEAN_THINKING_AND_AGILE_SOFTWARE_DEVELOPMENT_How_do_software-intensive_companies_use_them_in_practice


Methods Along a Path

While my career is not typical, it does mirror that of a class of developers who have had the persistence and care for the field not to seek early exits, not to hone their skills chasing the shiny object at the expense of clients' outcomes, and not to measure their own personal 10x without recognizing the value of a team. The key process-change milestones in Agile methods that I experienced, with some local color, follow:

Hobbyist

After seeing my uncle run a model of bees swarming near runways on a computer that occupied the length of a bedroom and required its heat to be exhausted, and then, soon after, doing many odd jobs to earn enough to buy a personal computer that could fit on a desk, monitor and all, I had found my first financier and customer for the software I developed: myself. My uncle did not fully appreciate that he had inspired me to learn computing, so he did not mentor me, but he did connect me with resources, including magazines covering this personal computing wave as well as manuals for C, Pascal, and BASIC. At the time, constructing memory models for the keywords of C and graphs of how they could be strung together to form logical and procedural constructs were unconscious acts. Compiling a relatively simple Pascal program, such as one reducing all of the permutations of eight queens placed on a chess board to only those in which every queen was safe, took twenty-four hours, give or take an accidental power-cord pull. BASIC provided much more immediate feedback, though it required manual line numbering and GOTOs of various forms. Each experience was repeated with exuberance, each with significant difficulty at first, and each difficulty was minimized along an apparent course from the invisible unknown to the approximately rendered, visible known.

My customer was overjoyed most of the time. When not, my customer told me exactly what I should have done better. Eager to please my customer, I dove right back into the keywords. Well, actually, not right away. I drew out pictures of the solution as my customer said it should look. I drew out key components of the solution and how they should interact. I listed out sequences and the forks in those sequences based on key variables. And then I dove back in to fix what I had only partially succeeded in delivering, hoping that my customer would be happy with this next development, yet knowing that there should always be the next "could be better".

Waterfall

After doing here-and-there contract work through university, then owning and operating a small data-backup solution for local businesses, I encountered the contract-oriented nature of Waterfall in my first full-time job as a software developer. There I found deadlines met at the expense of maintainability; passive-aggressive needling over whether details were in the contract or not, or to whatever not-so-specific level they were in the contract (and wait, aren't we in the same company, why is there a contract?); uncertainty over whether someone would help, because they were too busy or behind on their own obligations; and, in general, the work of getting past all of this and remembering that we are in an infinite game with our team and our customers, so that getting ahead means working harder, working smarter, and also working together. There weren't many books or blogs. Ha, the internet was nascent, and my use of it leaned toward reading RFCs, learning the processes of obtaining the necessities to launch an eCommerce business, and adjusting to the search engines' emerging algorithms.

Spiral Model

On the second big project that I led within that Waterfall-oriented first job, a contractor who had had eight successful stints at the company (each stint requiring a year in between, meant to encourage converting such good contractors to full-time, yet most remained in the more lucrative contractor status) determined to mentor me in the Microsoft version of a marked improvement on the Waterfall model. In Microsoft's form and presentation of the Spiral Model, the process was likened to, and was highly cohesive with, its Rapid Application Development tooling. This continuous-improvement model was also being studied and touted in academia, e.g., the Capability Maturity Model from Carnegie Mellon University. While it was a bit of marketing not only to lockstep all of the Visual Studio tool versions to 6.0 (ASP was previously 1.0) but also to tie the software to a specific methodology, there is and was a lot of merit in reducing cognitive load for developers, putting up an easy-to-grok interface while tweaking and tuning performance in the framework, and bringing previously enterprise-only operability features of the OS and of web serving (IIS, as opposed to SOAP and later REST web services) into the developer mainstream, all leading to actual rapid milestone deliveries from prototype through first full delivery and on to year-over-year updates and, in some cases, the next Agile increment: continuous engagement with the business unit.

eXtreme Programming

Concurrently, as the documentation and other up-front quality and maintainability artifacts that were requirements (and byproducts) of successful Waterfall were waning, there emerged a call for technical and business subject-matter experts to step up their personal processes and contribute to team processes, while also applying their apparent skills and their ability to speak, understand, and deliver software solutions operating in the domain of the business, as agents of the business. In hindsight, it is no wonder they called this eXtreme Programming. There are always aspects of a solution that are not known when we set out to develop it, some requiring skills that we must learn, or teach others on the delivery team, rather rapidly; so being "in the fish bowl", where the developer's management and business customer are awaiting deliverables and can witness little to no output during these capital development efforts, character is required and XP is gained. At the time I faced this shift, I was working under the lead of a co-author of Use Cases: Requirements in Context (slightly lesser known than Use Cases, but no less useful). Where I had previously lauded the gods of software, from Kernighan and Ritchie and Stroustrup through to the Rational Unified Process / UML and the "Gang of Four" design patterns, I was now working with such a titan, and he was just like me: attempting to help the business and to help others doing likewise. Much was learned, but much was unlearned. Where in Waterfall we had praised the artifacts, in eXtreme Programming we were including the artifacts in the early-and-often deliverables to help ensure that the entirety of the business understood the business in its current and next-increment states. Quite a number of merit-based human-resource strategies and mechanics helped those who achieved to further refine the game in their favor. Some of us were far outperforming our peers, so much so that many within a team, and in some cases entire teams, were clearly lost in the wake. There were also various eXtreme Piles of spaghetti that made for some juicy contract work: untangling, refactoring, and in some cases redoing entirely, on more solid architecture, what poorer souls had hurried out under such watchful eyes without such a foundational layer.

To be continued in pt2, from eXtreme to Kanban, aka "Where is that TPS Report?"

[1] Rodríguez, Pilar (2013). Combining Lean Thinking and Agile Software Development: How do software-intensive companies use them in practice?

Sunday, April 5, 2015

Redis for Service Monitor Visibility

From #redis on FreeNode (made anonymous and more succinct):
Q: Is redis a good choice if I need a solution to order my result and filter it, or should I use mysql for this job?  I'm developing an interface to show Nagios status information and for this I need to order by state (up, down, unknown) or state since and so on.  I played with Memcached, then found Redis, but I'm not sure if it is a good idea to use Redis for a job like this.

An answer led to this being implemented in MySQL instead, so the original poster added a desire to protect MySQL from 12K inserts (without specifying the period).

12K inserts per minute is barely doable for a single MySQL server with the table indexed for the queries specified. With doubt about the period, however, a MySQL solution will likely need to be modeled to match the write and read patterns as well, or performance will suffer over time. So data modeling should be performed either way.

A: For monitoring data, as with all data in Redis, model the data as it will be accessed. There is very likely an existing tool that already provides this monitoring feature set, likely using a timeseries data store (see Circonus as an alternative to Nagios). However, this seems like a good case for learning how to model data when using Redis.

I'd imagine the following queries: (1) Which host/services are in a given state {up|down}? (2) Which hosts are hosting a given service? (3) What is the data for a host/service, such as {host, service, os_version, service_version, state, last_monitor_time, last_up_time, last_down_time}? (4) What are the last n status monitor messages received for a host/service? In short, (1) is a couple of sets for {all|down}, with {up} derived by difference; (2) is a set; (3) is a hash; (4) is a finite-size list, left-pushing and right-popping on insert.

In longer form (using #{key_part} to indicate templated values):
1) could be modeled as sets "mon:hostservices:all" and "mon:hostservices:down" with "mon:hostservices:up" being achieved by a set difference and the host/service being either added or removed from the "mon:hostservices:down" when a monitor event is processed.  Items in the set are in the form of the identifier used in the item's data (see 3).
2) is modeled similarly to "...all" in (1): the host is added as the item value to "mon:hostservices:#{service}" when a monitoring event is processed.  The interesting bit is how host/services are removed.  A reaper process could be employed to iterate over all host/services, removing entries from the "mon:hostservices:all", "mon:hostservices:down", and "mon:hostservices:#{service}" sets whose last_monitor_time (see 3) has expired (is longer ago than is considered active).
3) could be modeled as a hash "mon:hostservices:#{host}:#{service}" with the fields set straightforwardly from the monitor event, with the exception of last_up_time and last_down_time, where the field is chosen based on whether the event indicates an up or a down state.
4) could be modeled as a list "mon:hostservices:#{host}:#{service}:events" with finite size, pushing the message on the left-hand side while popping the right-hand side of the list.
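
As a rough sketch only, here is how processing one monitoring event against the model above might look in Python with the redis-py client; the client choice, the event fields, and the 100-message cap are assumptions, not part of the original discussion:

    import time

    import redis

    r = redis.Redis()  # assumes a local Redis instance; requires redis-py >= 3.5 for hset(mapping=...)

    def process_event(event):
        """Apply one monitoring event to the sets, hash, and list described above.
        `event` is a dict such as {"host": ..., "service": ..., "state": "up"|"down",
        "message": ...}; the field list is trimmed for brevity."""
        hs = f"{event['host']}/{event['service']}"   # item identifier used in the sets
        now = int(time.time())

        # (1) membership sets: all host/services, plus the down set
        r.sadd("mon:hostservices:all", hs)
        if event["state"] == "down":
            r.sadd("mon:hostservices:down", hs)
        else:
            r.srem("mon:hostservices:down", hs)

        # (2) hosts per service
        r.sadd(f"mon:hostservices:{event['service']}", event["host"])

        # (3) host/service data hash
        data_key = f"mon:hostservices:{event['host']}:{event['service']}"
        fields = {"host": event["host"],
                  "service": event["service"],
                  "state": event["state"],
                  "last_monitor_time": now}
        fields["last_down_time" if event["state"] == "down" else "last_up_time"] = now
        r.hset(data_key, mapping=fields)

        # (4) finite list of recent monitor messages (keep the last 100, an assumed cap)
        r.lpush(f"{data_key}:events", event["message"])
        r.ltrim(f"{data_key}:events", 0, 99)

    def up_hostservices():
        # "up" is derived as the set difference: all minus down
        return r.sdiff("mon:hostservices:all", "mon:hostservices:down")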

The above model is intentionally simplified.  Additional queries may also be desired; for example, service availability could be modeled as timeseries counters of downtime.  Timeseries counters may be implemented using a zset with a partitioned timestamp as the member and the counter value as the score.  The key for the zset in this case would be "mon:hostservices:#{host}:#{service}:counters:downtime".  The downtime counter would be incremented whenever a monitoring event indicated downtime, and the increment amount would be the monitoring period.  Again, a reaper may be employed to remove data that is older than desired to be retained.
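
A similar sketch for the downtime counter and a simple reaper; the monitoring period, retention window, and hourly bucketing below are assumptions:

    import redis

    r = redis.Redis()          # same client as in the sketch above
    MONITOR_PERIOD = 60        # seconds between monitor events; assumed
    RETENTION = 30 * 86400     # keep roughly 30 days of counters; assumed

    def record_downtime(host, service, event_time):
        # partition the timestamp into hourly buckets; each bucket is one zset member,
        # and the score accumulates seconds of downtime within that bucket
        bucket = event_time - (event_time % 3600)
        key = f"mon:hostservices:{host}:{service}:counters:downtime"
        r.zincrby(key, MONITOR_PERIOD, bucket)

    def reap_downtime(host, service, now):
        # reaper: drop hourly buckets that have aged out of the retention window
        key = f"mon:hostservices:{host}:{service}:counters:downtime"
        old = [m for m in r.zrange(key, 0, -1) if int(m) < now - RETENTION]
        if old:
            r.zrem(key, *old)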

While this may be a good case for understanding data modeling in Redis, the cost of developing and operating an in-house solution is very likely higher than the cost of licensing and implementing a monitoring service that provides the desired functionality.  The modeling exercise, though, should help in selection, so it remains fruitful.

Thursday, March 26, 2015

Enjoying Computing, Enjoying Erlang

Nearly 33 years ago I got the spark for computing when I saw my uncle's FORTRAN at work, simulating the flight of bees across tarmacs and the impact of such seemingly trivial things over time.  His work was wondrous to me and had a very meaningful effect on my understanding of the importance of information, where to measure it, how to process it, and all the while to work with the knowledge that the smallest of things can have great impact.

Nearly a week ago, I began evaluating Riak and am sparking again from the architecture, the support from Basho, and the elegance of Erlang.  Through functional argument matching, functions are simplified, matching on predicates rather than branching on imperative logic.  Through modules, dependencies are clearly delineated.  Through actors, distributed computing is modeled in a manner that any consumer of an (e)mail system should understand.  This all comes at some performance cost, but that is generally overcome by the greater performance gains of concurrency and by simplified exception handling.  And the exercise of optimizing simple recursive functions into a tail-recursive form is a joyful experience.  Built-in profilers make chasing code coverage and eking out higher and higher requests per second a game for fun and profit.

For those not yet deep into the world of imperative programming, Erlang might bring you the wonder that those digital bees brought me.  For any who have had tastes of functional programming, concurrency, and dynamic languages, Erlang may be quite enjoyable.  I found the following to be a well-written, at times funny, foray into the language:
http://learnyousomeerlang.com/content

Monday, March 9, 2015

MVC from a CRC Perspective

The Model-View-Controller (MVC) pattern maps well to programming an HTTP client interface over persistent data stores, as each external system interface requires a decent amount of protocol-specific adapter code, which the pattern keeps well focused.

In measuring the effectiveness of the MVC pattern in focusing code according to component purpose, several anti-patterns have arisen, such as the following:
  • Spaghetti View
  • Fat Model
  • Fat Controller

Instead of delving into the anti-patterns, we should first understand the intent of the pattern and what force(s) cause it to break down, giving rise to the anti-patterns.

The MVC pattern from a Class Responsibility Collaborator (CRC) perspective follows, specified for a network service, with classes listed in the order of a client call path trace:
  • Router
    • List the business operations of the domain
    • Route business operation requests to the appropriate Controller
    • Translate inter-process communication (IPC) protocol semantics, i.e., HTTP, into function-call semantics.
  • Controller
    • Receive Client business operation requests
    • Validate parameters
    • [optional; may be omitted as an optimization in a system with guaranteed postconditions] Validate system state with respect to the requested operation
    • Coordinate Model operations
    • Respond to Client with Resource or Error
  • Model
    • Receive CRUD operation requests
    • Translate DataStore protocol semantics into a common CRUD set of operations.
    • Respond to the Controller with Resource or Error
  • View
    • Receive Model
    • Translate Model into a format fit for consumption

With the MVC CRC listing in mind, developers can relatively easily create, read, update, and delete code in the appropriate class when programming a feature that exposes operations on a single resource.  This leads to quick "green field" development as well as to small, incremental modifications, such as exposing a display_name as a calculated field on a Person resource when the resource previously had only the constituent parts as fields.
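
As a rough, framework-free illustration of the CRC listing above, here is a minimal sketch in Python; the Person resource, its fields, and the /person route are hypothetical examples, not taken from any particular framework:

    # Minimal sketch of the Router -> Controller -> Model -> View call path.
    # PersonModel, PersonView, PersonController, and the /person route are
    # hypothetical illustrations only.

    class PersonModel:
        """Model: receives CRUD requests and adapts the data-store protocol."""
        def read(self, person_id):
            # stand-in for a data-store read
            return {"id": person_id, "first_name": "Ada", "last_name": "Lovelace"}

    class PersonView:
        """View: translates a Model result into a format fit for consumption."""
        def render(self, person):
            return {"id": person["id"],
                    "display_name": f"{person['first_name']} {person['last_name']}"}

    class PersonController:
        """Controller: validates parameters and coordinates Model operations."""
        def __init__(self, model, view):
            self.model, self.view = model, view

        def show(self, params):
            if "id" not in params:
                return {"error": "id is required"}                       # respond with Error
            return self.view.render(self.model.read(params["id"]))       # respond with Resource

    class Router:
        """Router: maps IPC (e.g., HTTP) requests onto Controller operations."""
        def __init__(self):
            controller = PersonController(PersonModel(), PersonView())
            self.routes = {("GET", "/person"): controller.show}

        def dispatch(self, method, path, params):
            return self.routes[(method, path)](params)

    print(Router().dispatch("GET", "/person", {"id": 1}))
    # => {'id': 1, 'display_name': 'Ada Lovelace'}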

From measures of deviation in primarily MVC-patterned applications, anti-patterns typically emerge as a result of technical and feature increments that cause strain at points not addressed in the basic form of the pattern, such as:
  • multiple underlying data stores become necessary to store a resource
  • resource modifications become coupled and flow through incorrect channels, i.e., models interact directly
  • model operations, rather than being restricted to CRUD, expand into the set of business operations

A large percentage of deviations from quality MVC-patterned code result from basic code-handling mechanics that are necessarily made more difficult when employing MVC.  For instance, when altering the constraints on a resource, the change must be performed on the model as well as on the controller (and likely on a view or a few).  This friction with the DRY principle is evident in the need that the Ruby on Rails and Django code generators were perceived to fulfill.  No pattern or practice is perfect; as such, it is incumbent on the programmer to know their pattern and perform the alteration with minimal negative impact on the code through the development channel (through QA and up through delivery to the consumers of the solution).

Other anti-patterns in MVC emerge as a result of strains that are outside the scope of the MVC pattern itself, e.g., multiple underlying datastores.  A general solution that often proves successful is to employ the Façade pattern, creating a ResourceModel (with no datastore of its own) that translates one-to-many underlying models into a concise, coherent, and clear interface over an otherwise jumbled solution.  The Façade pattern also resolves the strain that arises when trying to keep a model restricted to CRUD operations only.
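
A minimal sketch of that Façade/ResourceModel idea follows, assuming a hypothetical Person resource whose data is split across two underlying stores; none of the class names below come from the discussion above:

    from dataclasses import dataclass

    @dataclass
    class PersonRecord:        # e.g., rows in a relational store
        id: int
        first_name: str
        last_name: str

    @dataclass
    class PersonPreferences:   # e.g., documents in a document store
        person_id: int
        locale: str

    class PersonResourceModel:
        """Facade: exposes one CRUD-only resource over multiple underlying models,
        so controllers and views never see the datastore split."""
        def __init__(self, records, preferences):
            self._records = records          # dict: id -> PersonRecord
            self._preferences = preferences  # dict: id -> PersonPreferences

        def read(self, person_id):
            record = self._records[person_id]
            prefs = self._preferences[person_id]
            return {"id": record.id,
                    "first_name": record.first_name,
                    "last_name": record.last_name,
                    "locale": prefs.locale}

    people = {1: PersonRecord(1, "Ada", "Lovelace")}
    prefs = {1: PersonPreferences(1, "en_GB")}
    print(PersonResourceModel(people, prefs).read(1))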

If you encounter MVC anti-patterns in the wild, please wrangle them.  If you would like to share your experience, please comment.

Wednesday, February 18, 2015

ARP Poisoning mitigation with or without DAI

ARP poisoning, a mainstay of obtaining a "man in the middle" position, persists despite advances in the processing power and memory available to switch manufacturers.  There are, however, both current solutions and theoretical advances in the field which do not require expensive stateful packet analysis.

The Cisco Catalyst 6500 series ( http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/white_paper_c11_603839.html ) offers Dynamic ARP Inspection (DAI), a feature which best practice recommends enabling but which is not enabled by default.  DAI basically performs ARP verification against a trusted binding table, putting some of the additional memory capacity and processing power to use.  The feature, however, depends upon DHCP snooping, and thus upon the accuracy of DHCP data.  Despite the potential hole exposed by the dependence upon DHCP, DAI is likely to mitigate the vast majority of ARP poisoning attacks in the wild, just as it has in the lab.

Alternative solutions may be developed without the dependence upon DHCP snooping, but employing similar tradeoff analysis.  For example, subsequent attempts to associate a MAC with an IP Address can be limited as well as progressively throttled.

The question of how to handle reaching the limit, beyond the naive "reject at limit", may call for protocol changes.  Protocol changes are, for good practical reason, not only discouraged but a path taken at one's own peril.  The number of devices which implement ARP at the current protocol level is vast.  While the devices which we personally operate may be easily patched, patch drift is a reality due to the perceived cost/benefit of remaining up-to-date.  Nearly all networking devices utilize firmware and so support patching, but patch drift for such devices is larger than for personal computers.  The parallel support of a proposed version alongside current versions of the protocol therefore has a real cost that is extremely high.  IMHO it is thus best not to employ limiting.

Progressive throttling of ARP, while strengthening the certainty of a MAC-to-IP association, would require relatively cheap (no control flow) stateful packet inspection, but would severely limit the window of opportunity and the potential for ARP poisoning.  Even without stateful packet inspection, progressive throttling has an opportunity to succeed in preferring the actual addressee over the snooping man in the middle.
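
As a toy, host-side illustration of the progressive-throttling idea (plain Python, not switch firmware; the hold-down constants and class name are assumptions for the sketch):

    import time

    HOLD_BASE = 1.0    # seconds of hold-down after the first rebinding; assumed
    HOLD_MAX = 300.0   # cap on the hold-down; assumed

    class ArpThrottle:
        """Track how often an IP's MAC binding changes and impose an exponentially
        growing hold-down before accepting a new binding."""
        def __init__(self):
            self._bindings = {}   # ip -> (mac, last_change_time, rebind_count)

        def observe(self, ip, mac, now=None):
            """Return True if the ARP reply should be accepted, False if throttled."""
            now = time.time() if now is None else now
            entry = self._bindings.get(ip)
            if entry is None:
                self._bindings[ip] = (mac, now, 0)
                return True
            cur_mac, last_change, rebinds = entry
            if mac == cur_mac:
                return True   # no change; a stable binding is never throttled
            hold = min(HOLD_BASE * (2 ** rebinds), HOLD_MAX)
            if now - last_change < hold:
                return False  # rebinding too quickly; drop/ignore this reply
            self._bindings[ip] = (mac, now, rebinds + 1)
            return True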

Throttling and limiting are just one example of application-layer techniques that may be employed at the networking layer.  In the meantime, given its mitigation success, DAI should continue to be enabled per best practice.

Monday, July 14, 2014

Naughty Beginnings

Attached are the beginnings of a game that we plan to use to test a new AI being.  This is not the most stringent test; in fact, it is a children's game.  At this point I had begun coding the solution, but I have to run to the lab to put out a fire that the AI being started while throwing a tantrum (and a few technicians).

Get the gist: https://gist.github.com/paegun/8d5950f7c0c523669c8b

Please complete the HumanObserver.display method for a command-line interface (CLI) or HTML interface.

Please don't get distracted by the remaining classes in the overall design.  I was simply blocking out the design ahead.  Once we have the display, I expect the project will iterate rather rapidly.

Thank you.  I look forward to seeing some naughts, crosses, and elegant code.


Background: This is a Ruby adaptation of an interview question that was so popular in the hiring of C programmers that I believe it to have been influential in the making of the movie "War Games".

Tuesday, July 8, 2014

Wishing you great success

Given the following definitions of success:
1. the favorable or prosperous termination of attempts or endeavors
2. the attainment of wealth, position, or honors
3. a performance or achievement that is marked by success
4. a person or thing that has had success
5. the difference between realized and expected value
6. the correct or desired result of an attempt
7. the opposite of failure
8. going from failure to failure without loss of enthusiasm
9. the result of a desire for success being greater than the fear of failure
10. the result of rising early, working hard, and striking oil

A. Which is nearest to your definition?

B. Which is the nearest to your ideal boss?

C. Which is the nearest to your ideal colleague?

D. Which is the nearest to your ideal dependency?

E. Which is the nearest to that which you would wish upon your child?

F. What type of person would you attribute each to?  For example, a definition of "success is the cause of more work" could be attributed to a dependable worker.