From #redis on FreeNode (made anonymous and more succinct):
Q: Is redis a good choice if I need a solution to order my result and filter it, or should I use mysql for this job? I'm developing an interface to show Nagios status information and for this I need to order by state (up, down, unknown) or state since and so on. I played with Memcached, then found Redis, but I'm not sure if it is a good idea to use Redis for a job like this.
An answer led to this being implemented in MySQL instead, so the original poster added a desire to protect MySQL from 12K inserts (without specifying the period).
12K inserts per minute is barely doable for a single MySQL server with the table indexed for the queries specified. With the period in doubt, though, a MySQL solution will likely need to be modeled to match the write and read patterns as well, or performance will suffer over time. So data modeling should be performed either way.
A: For monitoring data, as with all data in Redis, model the data as it will be accessed. A tool very likely already provides the desired monitoring feature set, likely built on a time-series data store (see Circonus as an alternative to Nagios). Still, this seems like a good case for understanding how to model data when using Redis.
I'd imagine the following queries:
1) What hosts/services are in the state {up|down}?
2) What hosts are hosting a given service?
3) What is the data for a host/service, such as {host, service, os_version, service_version, state, last_monitor_time, last_up_time, last_down_time}?
4) What are the last n status monitor messages received for a host/service?
In short: 1) is a couple of sets for {all|down}, with {up} derived by set difference; 2) is a set; 3) is a hash; 4) is a finite-size list, left-pushing on insert and right-popping to bound its size.
In longer form (using #{key_part} to indicate templated values):
1) could be modeled as sets "mon:hostservices:all" and "mon:hostservices:down", with "mon:hostservices:up" achieved by a set difference; the host/service is added to or removed from "mon:hostservices:down" as each monitor event is processed. Items in the sets are in the form of the identifier used in the item's data (see 3).
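The following is a minimal sketch of point 1, assuming the redis-py client and a hypothetical event dict with host, service, and state keys; the function names are illustrative, not from the original discussion.

```python
import redis

r = redis.Redis(decode_responses=True)

def update_state_sets(event):
    # member format is the identifier used for the item's data (see 3)
    member = f"{event['host']}:{event['service']}"
    r.sadd("mon:hostservices:all", member)
    if event["state"] == "down":
        r.sadd("mon:hostservices:down", member)
    else:
        r.srem("mon:hostservices:down", member)

def up_hostservices():
    # "up" is derived rather than stored: all minus down
    return r.sdiff("mon:hostservices:all", "mon:hostservices:down")
```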
2) is modeled similarly to "...all" in 1 and achieved by adding the host as the item value to "mon:hostservices:#{service}" when a monitoring event is processed. The interesting bit is how host/services are removed: a reaper process could be employed to iterate over all host/services, removing the "mon:hostservices:all", "mon:hostservices:down", and "mon:hostservices:#{service}" set entries whose last_monitor_time (see 3) has expired (longer ago than is considered active). A sketch of such a reaper follows.
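A sketch of point 2 plus a reaper, under the same assumptions; the ACTIVE_WINDOW value is illustrative, and last_monitor_time is read from the hash described in point 3.

```python
import time
import redis

r = redis.Redis(decode_responses=True)
ACTIVE_WINDOW = 300  # seconds after which a host/service is considered inactive (illustrative)

def update_service_set(event):
    r.sadd(f"mon:hostservices:{event['service']}", event["host"])

def reap():
    now = time.time()
    for member in r.smembers("mon:hostservices:all"):
        host, service = member.split(":", 1)
        last = r.hget(f"mon:hostservices:{host}:{service}", "last_monitor_time")
        if last is None or now - float(last) > ACTIVE_WINDOW:
            r.srem("mon:hostservices:all", member)
            r.srem("mon:hostservices:down", member)
            r.srem(f"mon:hostservices:{service}", host)
```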
3) could be modeled as a hash "mon:hostservices:#{host}:#{service}", with the fields set straightforwardly from the monitor event, except that either last_up_time or last_down_time is chosen based on whether the event indicates an up or a down state.
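A sketch of point 3, again assuming the hypothetical event dict; the field names follow the query listed earlier.

```python
import time
import redis

r = redis.Redis(decode_responses=True)

def update_detail_hash(event):
    key = f"mon:hostservices:{event['host']}:{event['service']}"
    now = time.time()
    fields = {
        "host": event["host"],
        "service": event["service"],
        "os_version": event.get("os_version", ""),
        "service_version": event.get("service_version", ""),
        "state": event["state"],
        "last_monitor_time": now,
    }
    # choose last_up_time or last_down_time based on the event's state
    fields["last_down_time" if event["state"] == "down" else "last_up_time"] = now
    r.hset(key, mapping=fields)
```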
4) could be modeled as a list "mon:hostservices:#{host}:#{service}:events" of finite size, pushing the message on the left-hand side and popping the right-hand side when the list exceeds its size.
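A sketch of point 4; MAX_EVENTS is an illustrative bound (LTRIM would be an equivalent idiom to the right-pop described).

```python
import redis

r = redis.Redis(decode_responses=True)
MAX_EVENTS = 100  # illustrative finite size

def log_event_message(event, message):
    key = f"mon:hostservices:{event['host']}:{event['service']}:events"
    r.lpush(key, message)
    # pop from the right until the list is back under its bound
    while r.llen(key) > MAX_EVENTS:
        r.rpop(key)
```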
The above model is intentionally simplified. Additional queries may also be desired; for example, service availability could be modeled as time-series counters for downtime. Time-series counters may be implemented using a zset with a partitioned timestamp as the member and the counter value as the score. The key for the zset in this case would be "mon:hostservices:#{host}:#{service}:counters:downtime". The downtime counter would be incremented whenever a monitoring event indicated downtime, and the increment would be the monitoring period. Again, a reaper may be employed to remove data older than is desired to be retained.
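A sketch of the downtime counter; the monitoring period and the hourly partitioning of timestamps are illustrative choices.

```python
import time
import redis

r = redis.Redis(decode_responses=True)
MONITOR_PERIOD = 60   # seconds between checks (illustrative)
BUCKET = 3600         # partition timestamps into hourly members (illustrative)

def count_downtime(event):
    if event["state"] != "down":
        return
    key = f"mon:hostservices:{event['host']}:{event['service']}:counters:downtime"
    member = int(time.time()) // BUCKET * BUCKET   # partitioned timestamp as the member
    # the score is the counter; increment it by the monitoring period
    r.zincrby(key, MONITOR_PERIOD, member)
```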
While this may be a good case for understanding data modeling in Redis, the cost of developing and operating an in-house solution is very likely higher than the cost of licensing and implementing a monitoring service that provides the desired functionality. The modeling exercise should nonetheless help in selection, so it remains fruitful.
Sunday, April 5, 2015
Thursday, March 26, 2015
Enjoying Computing, Enjoying Erlang
Nearly 33 years ago I got the spark for computing when I saw my uncle's FORTRAN at work, simulating the flight of bees across tarmacs and the impact of such seemingly trivial things over time. His work was wondrous to me and had a lasting effect on my understanding of the importance of information: where to measure it, how to process it, and all the while working with the knowledge that the smallest of things can have great impact.
Nearly a week ago, I began evaluating Riak and am sparking again from the architecture, the support from Basho, and the elegance of Erlang. Through functional argument matching, functions are simplified, matching on predicates rather than branching on imperative logic. Through modules, dependencies are clearly delineated. Through actors, distributed computing is modeled in a manner that any user of an (e)mail system should understand. This all comes at some performance cost, but that is generally overcome by the greater gains from concurrency and simplified exception handling. The exercise of optimizing simple recursive functions into tail-recursive form is a joyful experience, and the built-in profilers make chasing code coverage and eking out higher and higher requests per second a game for fun and profit.
For those not yet deep into the world of imperative programming, Erlang might bring you the wonder that those digital bees brought me. For anyone who has had a taste of functional programming, concurrency, and dynamic languages, Erlang may be quite enjoyable. I found the following to be a well-written, at times funny, foray into the language:
http://learnyousomeerlang.com/content
Monday, March 9, 2015
MVC from a CRC Perspective
The Model-View-Controller (MVC) pattern maps well to programming an HTTP client interface backed by persistent data stores, as each external system interface requires a decent amount of protocol-specific adapter code, which the pattern keeps well focused.
While measuring the effectiveness of the MVC pattern in focusing code according to component purpose, several anti-patterns have arisen, such as the following:
- Spaghetti View
- Fat Model
- Fat Controller
Instead of delving into the anti-patterns, we should first understand the intent of the pattern and what force(s) cause it to break down, giving rise to the anti-patterns.
The MVC pattern from a Class Responsibility Collaborator (CRC) perspective follows, specified for a network service and listing classes in the order of the trace of a client call path:

- Router
  - List the business operations of the domain
  - Route business operation requests to the appropriate Controller
  - Translate InterProcessCommunication (IPC) protocol semantics, i.e. HTTP, into functional programming semantics

- Controller
  - Receive Client business operation requests
  - Validate parameters
  - [optional; should be omitted when optimizing for a postcondition-guaranteed system] Validate system state with respect to the requested operation
  - Coordinate Model operations
  - Respond to the Client with a Resource or an Error

- Model
  - Receive CRUD operation requests
  - Translate DataStore protocol semantics into a common CRUD set of operations
  - Respond to the Client with a Resource or an Error

- View
  - Receive a Model
  - Translate the Model into a format fit for consumption
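To make the listing concrete, here is a compact sketch in Python; the class and method names are invented for illustration and do not come from any particular framework.

```python
class Model:
    """Receives CRUD requests; translates data-store semantics into a common CRUD set."""
    def __init__(self, store):
        self.store = store                          # here, a plain dict stands in for the store
    def read(self, person_id):
        row = self.store.get(person_id)
        if row is None:
            raise LookupError("not found")          # Error path
        return row                                  # Resource path

class View:
    """Receives a Model (here, a dict) and translates it into a consumable format."""
    def render(self, resource):
        return {"person": resource}                 # e.g. to be JSON-encoded by the IPC layer

class Controller:
    """Receives business operation requests, validates parameters, coordinates the Model."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def show(self, params):
        if "id" not in params:                      # validate parameters
            return {"error": "id is required"}
        try:
            resource = self.model.read(params["id"])
        except LookupError as err:
            return {"error": str(err)}
        return self.view.render(resource)

class Router:
    """Lists the business operations and routes requests to the Controller."""
    def __init__(self, controller):
        self.routes = {("GET", "/people"): controller.show}
    def dispatch(self, method, path, params):
        handler = self.routes.get((method, path))
        return handler(params) if handler else {"error": "no such operation"}

# usage: Router(Controller(Model({"1": {"first": "Ada"}}), View())).dispatch("GET", "/people", {"id": "1"})
```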
With the MVC CRC listing in mind, developers can relatively easily create, read, update, and delete code in the appropriate class when programming a feature that exposes operations on a single resource. This leads to quick "green field" development as well as small, incremental modifications, such as exposing a display_name as a calculated field on a Person resource when the resource previously had only the constituent parts as fields.
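That kind of increment stays local to the Model; a hypothetical sketch:

```python
class PersonModel:
    """Person resource that already carries the constituent name fields."""
    def __init__(self, first_name, last_name):
        self.first_name, self.last_name = first_name, last_name

    @property
    def display_name(self):
        # calculated field exposed without touching the underlying store
        return f"{self.first_name} {self.last_name}"
```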
From measures of deviance in primarily MVC-patterned applications, anti-patterns typically emerge from technical and feature increments that strain points not addressed in the basic form of the pattern, such as:
- multiple underlying data stores become necessary to store a resource
- resource modifications become intertwined and occur through incorrect channels, i.e. models interact directly
- model operations cease to be restricted to CRUD and instead expand into the set of business operations
A large percentage of deviations from quality MVC-patterned code result from basic code-handling mechanics that are necessarily made more difficult by MVC. For instance, when altering the constraints on a resource, the change must be made on the model as well as on the controller (and likely on a view or two). This tension with the DRY principle is evident in the perceived need that the Ruby on Rails and Django code generators fulfilled. No pattern or practice is perfect; as such, it is incumbent on the programmer to know the pattern and perform the alteration with minimal negative impact on the code through the development channel (through QA and up through delivery to the consumers of the solution).
Other anti-patterns in MVC emerge from strains that are outside the scope of the MVC pattern itself, e.g. multiple underlying data stores. A general solution that often proves successful is to employ the Façade pattern, creating a ResourceModel (with no data store of its own) that translates one-to-many underlying models into a concise, coherent, and clear interface for an otherwise jumbled solution. The Façade pattern also resolves the strain that arises when attempting to restrict a model to supporting only CRUD operations.
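A hedged sketch of that Façade, with invented names, composing two underlying models behind one ResourceModel:

```python
class ProfileModel:
    def read(self, person_id):
        return {"id": person_id, "first": "Ada", "last": "Lovelace"}   # stand-in for store 1

class PreferencesModel:
    def read(self, person_id):
        return {"theme": "dark"}                                        # stand-in for store 2

class PersonResourceModel:
    """Façade: no data store of its own, exposing one coherent Person resource."""
    def __init__(self, profiles, preferences):
        self.profiles, self.preferences = profiles, preferences
    def read(self, person_id):
        profile = self.profiles.read(person_id)
        prefs = self.preferences.read(person_id)
        # the controller sees one resource and never the two underlying stores
        return {**profile, "preferences": prefs}
```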
If you would experience MVC antipatterns in the wild, please wrangle them. If you would like to share your experience, please comment.
Wednesday, February 18, 2015
ARP Poisoning mitigation with or without DAI
ARP poisoning, a mainstay for obtaining a "man in the middle" position, persists despite the processing power and memory now available to switch manufacturers. There are, however, current solutions and theoretical advances in the field which do not require expensive stateful packet analysis.
The Cisco Catalyst 6500 series ( http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/white_paper_c11_603839.html ) offers Dynamic ARP Inspection (DAI), a feature which best practice recommends enabling but which is not enabled by default. DAI essentially performs ARP table verification, putting some of the additional memory capacity and processing power to use. The feature, however, depends upon DHCP snooping, and thus upon the accuracy of DHCP data. Despite the potential hole exposed by that dependence, DAI is likely to mitigate the vast majority of ARP poisoning attacks in the wild, as it has in the lab.
Alternative solutions may be developed without the dependence upon DHCP snooping, employing a similar tradeoff analysis. For example, subsequent attempts to associate a MAC with an IP address can be limited as well as progressively throttled.
How to handle reaching the limit, beyond the naive "reject at limit", may call for protocol changes. Protocol changes are, for good practical reasons, not only discouraged but a path taken at one's own peril. The number of devices that implement ARP at the current protocol level is vast. While the devices we personally operate may be easily patched, patch drift is a reality due to the perceived cost/benefit of remaining up to date. Nearly all networking devices use firmware and so support patching, but patch drift among such devices is greater than among personal computers. Therefore, parallel support for a proposed version and the current version of the protocol carries an extremely high real cost. IMHO it is thus best not to employ limiting.
Progressive throttling of ARP while strengthening the certainty of a MAC-to-IP association would require relatively cheap (no control flow) stateful packet inspection, but would severely limit the window of opportunity for ARP poisoning. Even without stateful packet inspection, progressive throttling has an opportunity to succeed in preferring the actual addressee over the snooping man in the middle. A theoretical sketch follows.
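This is a purely illustrative sketch of the throttling idea, not an existing switch feature; the data structure, thresholds, and backoff policy are all assumptions.

```python
import time

BASE_DELAY = 1.0    # seconds to ignore rebinding attempts after a change (illustrative)
MAX_DELAY = 300.0   # cap on the backoff window (illustrative)

class ArpThrottle:
    """Tracks IP-to-MAC bindings and progressively throttles rebinding attempts."""
    def __init__(self):
        self.bindings = {}   # ip -> (mac, last_change_time, current_delay)

    def accept(self, ip, mac):
        """Return True to honor the ARP reply, False to drop it."""
        now = time.time()
        entry = self.bindings.get(ip)
        if entry is None or entry[0] == mac:
            # first sighting, or the binding is unchanged: reset the window
            self.bindings[ip] = (mac, now, BASE_DELAY)
            return True
        _prev_mac, changed_at, delay = entry
        if now - changed_at < delay:
            return False                             # inside the throttle window: drop
        # allow the rebind, but widen the window for the next change
        self.bindings[ip] = (mac, now, min(delay * 2, MAX_DELAY))
        return True
```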
Throttling and limiting are just one example of application-layer techniques that may be employed at the networking layer. In the meantime, DAI, given its mitigation success, should continue to be enabled per best practices.