handling multiple mobile clients: from single controller to single view

Posted in mobile platforms on August 9, 2007 by Adam
You may have noticed that we release many of our apps as both locally installed rich clients AND on WAP simultaneously. For example, we have a rich client “CheapGas” application and a WAP CheapGas. Some users have asked “do you really write two separate codelines for each way that you deliver these applications?”.

Well of course not. Almost any app developer would get some code sharing here. But in a previous post I discussed how you can write single controllers and controller actions and still support multiple client types, and do it in a way that is much terser and more elegant than the standard Rails 1.2 “respond_to” method, which tends to lengthen and obfuscate your controller methods.
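
For those who didn’t read that earlier post, here is a minimal sketch of the idea, assuming a before_filter that stashes the client type in an instance variable. The names detect_client_type and @client_type, and the user agent and Accept header checks, are illustrative only, not our production code:

# Sketch: detect the client type once per request so that controller actions
# and view helpers can branch on it later.
class ApplicationController < ActionController::Base
  before_filter :detect_client_type

  private

  def detect_client_type
    accept = request.env['HTTP_ACCEPT'].to_s
    agent  = request.env['HTTP_USER_AGENT'].to_s
    @client_type =
      if agent.include?('MobioRunner')      # hypothetical user agent string for our runner
        :runner
      elsif accept.include?('vnd.wap.wml')  # WAP browsers advertise WML in their Accept header
        :wap
      else
        :html                               # default to ordinary (X)HTML
      end
  end
end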

But you can go even further than that and literally write one application, with the same controller AND views, to support different clients – something I hadn’t seen done before. Leveraging some of the magic of Rails and RXML templates combined with our own Rails helper functions, we’ve done just that. For example, the CheapGas app (which I worked on quite a bit personally as a “demo app” before its release) does this: just one controller and view for the app whether it’s delivered as WAP or HTML or to the Mobio rich client runner.

The way we do this is to use RXML templates and to replace the standard HTML tag helper library (which contains functions for creating dropdown boxes, input fields, forms, and so on) with equivalent functions that generate WAP code, XHTML tags, or tags for our own runner, depending on what kind of device is accessing the app. The example RXML template for the CheapGas search page (which supports our runner, WAP and even XHTML from just one Rails controller and view) is shown below.

xml = doctype(xml) do |x|
  xml.head { }
  body(x) do |x|
    action = "station/list"
    form(action, x) do |x|
      xml.p 'Search for gas stations by named location or zip code'
      xml.table do
        fields = []
        xml.tr do
          xml.td { xml.text 'Select location ' }
          xml.td { xml << select_tag("location", options_for_select(@locations)) }
          xml.td { xml << link_to("<small>(Manage locations)</small>", :controller => "location") }
          fields << "location"
        end
        xml.tr do
          xml.td { xml.text 'Or supply zip code ' }
          xml.td { xml << text_field_tag('zip') }
          fields << "zip"
        end
        xml.tr do
          xml.td { xml.text 'Within radius of ' }
          xml.td {
            xml << text_field_tag('radius', "style1")
            xml.text 'miles'
            fields << "radius"
          }
        end
        xml.tr do
          xml.td { xml.text 'Only prices within last ' }
          xml.td { xml << text_field_tag('filtertime') }
          fields << "filtertime"
        end
        xml.tr { xml.td { submit_tag('Search', action, fields, x) } }
      end # table
    end # form
  end # body
end # doctype

RXML templates such as the one above let the Rails view be generated entirely as code, instead of the sometimes awkward mixture of tags and code, which would of course tie the view to a concrete set of tags. Once it’s all code, “smart helper functions” (select_tag(), text_field_tag(), submit_tag(), form(), body(), head() and doctype()) can be used to generate the tags for each client type. These functions use the “client type” variable (populated with a Rails before_filter as described earlier) to generate the appropriate underlying tag output over the wire. The HTML generation is straightforward. Generating WAP and our own rich client tags is more involved.
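
As a concrete illustration, here is a minimal sketch of what one of these smart helpers might look like, keyed off the @client_type variable set by the before_filter. The :wap/:runner/:html symbols and the mobio:textfield tag name are assumptions for illustration, not the actual Mobio markup, and a real implementation would also escape values and handle the options hash:

# Sketch: a client-aware replacement for text_field_tag that emits WML,
# runner markup, or plain XHTML depending on the detected client type.
module ClientAwareFormHelper
  def text_field_tag(name, value = nil, options = {})
    case @client_type
    when :wap
      # WML input element
      %(<input name="#{name}" value="#{value}" type="text"/>)
    when :runner
      # hypothetical tag understood by the Mobio rich client runner
      %(<mobio:textfield name="#{name}" value="#{value}"/>)
    else
      # ordinary XHTML input for desktop and XHTML-capable mobile browsers
      %(<input id="#{name}" name="#{name}" type="text" value="#{value}"/>)
    end
  end
end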

The result for the app developer is one set of code, both controller and views, to deliver multiple output types. This is something I haven’t seen in Rails apps (or other apps for that matter), whether they’re targeting mobile scenarios or otherwise.

Just what it means to generate tags for our rich client interfaces (i.e. what the output is when the client type is “Mobio Runner”) is the subject of several future posts.

issues with Agile

Posted in software development on April 30, 2007 by Adam
When people hear about the many new mobile services that we have introduced and are continuing to introduce very rapidly, they ask “so are you doing agile development?”. At one level, we are building a platform that allows our mobile services to be built very quickly and which facilitates easy iteration and development of features for those services. So we certainly want to facilitate “agile development” of mobile services, where agile begins with a lowercase a. As to whether we develop our core platform and tools with the Agile Development model such as Scrum, the answer is probably no, although many of the processes that we follow (daily brief status meetings, continuous builds, short release cycles, primacy of the bug database) may seem to have parallels there. That said, here are my “top ten issues with Agile Development and Scrum”, especially as applied to the development of services. These points came out of a talk I gave today at the New Software Industry conference sponsored by Carnegie Mellon and UC Berkeley.

bias against upfront analysis in conflict with service-oriented development

Agile advocates wax poetic about the inability to determine requirements ahead of time. They state that it’s just impossible to know a system’s inputs and outputs, and cite reams of “industrial process control science” to back that up. But service oriented development applied to a large system hinges on doing just this: determining the major services required by a system, and then designing contracts for their implementation. That investment in some small amount of upfront analysis (scoffed at by agile advocates) pays off by allowing multiple teams to build services independently as long as they comply with the designed contracts.

flaws in the team scaling model

Agile advocates recommend a “scrum of scrums”, with scrum leaders represented, to manage cross-team dependencies. They actually need this because there is no emphasis on service contracts: problems have to get hashed out in a broader, freewheeling way. With service contracts, issues are far more likely to get solved bilaterally between the service provider and the service consumer.

“old wine in new bottles”

Many of the better ideas in agile have been practiced for a long time in the more successful development organizations. Such ideas include running projects day to day and driving all work from the bug database (the Product Backlog). At Microsoft and Google and many other companies this is almost a religion. Small teams (7±2). Daily short status meetings run round-robin. Continuous builds. Small code reviews before checkins (almost XP). Short releases (less than three months). Having been fortunate enough to be around some great engineering leaders, I’ve been doing projects this way for over 15 years. So why is it a problem to call this agile or Scrum and give these best practice artifacts new names? Teams can lose some of the process maturity and nuance which they’ve built around usage of these techniques.

wholesale process change or “incrementalism” and “team ownership of process”

As mentioned, and as acknowledged by agile gurus (such as Schwaber and Beedle), in the better shops most of these best process techniques are already being applied. However, there are almost always improvements that can be made in how a team approaches the development cycle and day to day work. I have found that building on what already works and incrementally changing processes (i.e. not every change in one release cycle, which I also like to be short) works best. I’ve also found that having an open and questioning attitude towards the best next incremental process change, and involving all team members in deciding on changes, is valuable. A team that feels it has made its own decision about what the process should be has much better buy-in, compliance and consistently enthusiastic volunteer participation in the process. This is better than “ok folks, now we’re doing Scrum”. Another way of putting it, for any team or organization considering introducing a new methodology (or “antimethodology”), is “what problem in the existing process are you trying to solve?”. Then go solve that problem directly rather than starting from scratch.

arbitrary timeboxes

Almost all of the timelines and timeboxes for specific events and durations in Scrum are arbitrary. This ranges from daily status meetings to every other time cycle, meeting or artifact. It doesn’t seem very agile to not react to the exigencies of a specific project. For example, the one month “Sprints” are a very arbitrary timeline and it is often difficult to build risky and time consuming blocks of features within those timeboxes. Agile advocates acknowledge this and cite examples of releases with two Sprints: a “feature sprint” and a stabilization sprint. This sounds not just arbitrary but pretty mushy and vague as well. More specifically…

one month Sprints

I’ve already mentioned my generic issues with the arbitrariness of one month Sprints. Then there is the practicability of such timeframes. Anecdotally (after working on dozens of software projects in my career), my experience is that this can work for “end applications” or simpler websites, even from “version 1”. For early releases of platform, infrastructure or tools code, my experience is that it is not quite feasible. I do accept that in either “app” or “platform” code, several releases into a product’s lifetime, one month Sprints are feasible. There is still the question of the arbitrariness and optimality of such a timeframe (there is a high overhead of QA and integration effort for such a short cycle). Again, the question is “what problem are you trying to solve?”. Given that stabilization overhead is decreased as a percentage of effort in a two month release, why is one month better? It may be that “the business” (operations, marketing, sales and of course the users themselves) is not ready to make maximum use of a monthly release. Additionally, the “way out” is to say that most projects are much larger than one month sprints and that a project is inherently “a bunch of Sprints”. But there are already more mature existing best practices for “themed incremental milestones”. And most successful software-focused organizations have evolved more subtlety and insight about how to handle such complexity than is evident in most agile screeds.

arbitrary guidelines are presented with pseudoscientific justifications of dubious certitude

Much of the agile practice that I’ve seen recommended has a core of common sense. My big issue is that the best organizations are already working at a level of sophistication better than those approximations. And those approximations are dressed up in questionable garb drawn from parallels to other sciences. For example, 7 as a team size doesn’t have much to do with the brain’s ability to handle 7±2 objects [Schwaber]. In some prominent agile writings, other parallels are drawn to process control, chaos theory and physics. The problem with these dubious connections is that they are held up as evidence for arbitrary decisions, rather than grounding those decisions on evidence from development projects themselves.

artificial straw man of traditional best practices

The process and the foibles of the “waterfall model” (does anyone really espouse such a completely serial model anymore, or use that term to describe how they do things?) described in the Agile Manifesto (of which Fowler was one author) and in books such as Schwaber and Beedle’s don’t resemble anything conducted in the dozens of projects that I’ve participated in. More typical is that even in the early 90s I worked on teams where product managers prepared requirements, program managers and dev leads wrote specs, devs prototyped code, and QA leads wrote test plans simultaneously early in a project. The desirability of such an approach, which was anything but “waterfall”, was recognized by most major software companies long before anyone ever used the term “agile development”. Microsoft codified this in written form as the Microsoft Solutions Framework. But when I was there it was just “how stuff got done”. Books such as “Rapid Development” summarized what had already become industry-wide recognized best practices. But the “words for things” remained the same as they were in the eighties. Much of Agile seems to be an attempt to put new words around widely accepted best practices, and to put arbitrary guidelines (one month sprints, 7 person teams) around reasonable defaults that most development shops were already more nuanced about.

optimal software process is an evolution

The terminology, milestones and deliverables in “conventional” ideas about the software process have evolved over many years in evolutionary (not revolutionary) response to changing technology and customer expectations. They can and should continue to evolve. For example, there may be deliverables that can be abbreviated or intentionally skipped (e.g. a requirements phase may be folded into a smaller specification). But it doesn’t hurt to make that choice consciously.

modern software tools can make “old artifacts” lightweight and even magnify their original value

Even more powerful is using modern tools to make such steps in the process much lighter weight, collaborative and distributed. As an example, take the case of two bogeymen of “agile development”: the ideas that upfront requirements and specifications are too time-consuming to be worthwhile. In our shop, feature requests collect in the bug database (e.g. Bugzilla or Trac) for a subsequent release (similar to a Scrum Product Backlog). Related sets of features are written up in narrative form on a requirements wiki page, often initially by a product manager (this is a chance to include use cases when and where helpful). But they are commented on, and revised (with change control history of course), by many interested team members including the likely assigned developers and QA.

After a brief in-person requirements review, the feature items are assigned to a responsible engineer for spec and implementation (along with other related features). The dev writes a few paragraphs about the feature set on a specs wiki page, which again invites comments from and revisions by interested parties such as PMs, other devs or QA. As with the requirements wiki page, the value of the spec page is to allow related features to be discussed, analyzed and described in toto. The scope of just how much specificity is required or optimal is determined by the dev. Often much of the content from the specs page can be leveraged later in a user manual if one is required. More often the specs page is all that is required for anyone who needs to use or maintain the software. When the spec is complete, the dev holds a timeboxed spec review with interested parties, and if there is consensus on the approach, proceeds with implementation. It’s difficult to find the wasted overhead in what I’ve described, and the use of tools such as wikis means these steps introduce very little overhead and slowdown to the process. In our shop we’ve automated the linkages between the wiki and bug database to make this even easier.

There’s nothing magic about this approach. The overall point is that these “traditional artifacts” of requirements and specifications are easier to produce with modern tools, and their value is, I think, even higher given the ability of more stakeholders to contribute to and benefit from the process.

In summary, on balance, among an existing high performing team, I prefer to see more familiar processes and nomenclature iteratively refined to meet the needs of a particular business, product or team, rather than starting afresh with a new methodology (even one that claims not to be a methodology) with a fresh set of names.

mobile Ajax

Posted in mobile platforms on March 12, 2007 by Adam
I’ve gotten a lot of response to my posting on WAP. The predominant theme has not actually been defending WAP in any way, but instead has been “ok, but what about mobile Ajax?”. Mobile Ajax, of the sort enabled by browsers such as Opera and Good Technology’s Good Mobile Intranet, provides several of the capabilities missing in WAP apps. Specifically, it allows for executing logic locally, such as rich validation and dynamic forms. It also provides some separation of presentation and data in the ability to download data asynchronously without browser roundtrips. While mobile Ajax is a big step forward over WAP and provides a much better user experience, it still has several limitations:

Working with Data Offline

Mobile Ajax-based applications do not allow for working with data offline. Native apps can be installed locally and run locally while completely disconnected. Inevitably some platform vendors will try to resolve this with proprietary extensions, similar to AvantGo’s “mobile offline browsing”, which they’ve been trying to get adopted for years. This may work for information feeds (the sweet spot for products like AvantGo). I don’t see it working for transactional applications or websites, or for any apps or web pages with user interaction (which should be a hallmark of web 2.0 properties, right?). By contrast, intelligently designed natively executing applications handle these offline scenarios quite nicely.

Taking Advantage of Device Capabilities

Browsers, including Ajax browsers, are all about making devices seem the same. On desktops, distinguished for the most part only by differences in screen resolution, this assumption has become more and more true. Mobile devices however are extremely heterogeneous: a dizzying array of screen resolutions, input and directional keys and widgets, and differing sound and video capabilities built into the hardware. That diversity is only increasing. To truly take advantage of a device’s native capabilities in any of these areas, a native application written to the device OS’s native APIs is necessary.

Industry Recognition

These are not observations unique to us. Large web properties such as Google and Yahoo are aggressively putting out native mobile applications for accessing horizontal consumer services such as maps or email. They are doing this with J2ME apps, Palm OS apps (despite the dubious future of the Palm OS) and even, horrors, Windows Mobile apps. This is despite their status as highly competent Ajax websites, which would presumably put them in a great position to build mobile Ajax sites for these services.

Difficulty of Development

The problem with doing native applications is the difficulty and time needed to do them well. Beyond the diversity of device operating systems (J2ME, Windows Mobile, BREW, Symbian, Palm, RIM), there’s just the inherent step function of writing an actual application in procedural code (and many flavors of it) versus the convenience of maintaining a server-side web application.

The effort in “going native” pays off for truly massively horizontal applications such as email and maps. However, if you go beyond the top dozen or so web applications, to those with less than tens of millions of users, the effort necessary to build a mobile equivalent becomes more difficult to justify. Especially as those web properties or connected applications become more and more about user interaction or transactions.

Does this mean that mobile Ajax will become predominant in those situations (for those sites or applications that can’t afford to “go native”)? I don’t think so. Most sites are still not aggressively using Ajax. It’s just much more work to do so across a diversity of site functions than to build more ordinary web pages. And once you do, that mobile Ajax site is still of limited use, since it can’t be used offline and isn’t necessarily taking advantage of unique device capabilities.

A Better Way?

So what are we, Mobio, doing? As you can guess, we build mobile applications that do run natively on mobile devices. As you will see from our forthcoming applications, we do manage to do them across many devices, with applications that are not as broadly applicable as email or maps.

We can and will take advantage of mobile Ajax browsers in the future on platforms that we may not choose to target directly. As mentioned, you can create attractive applications with mobile Ajax, but they will be missing some of the offline and device-specific capabilities available to native applications.

However, in either case, we do our authoring and application development in a way that abstracts us from much of the drudgery of creating device- and OS-optimized native applications, and even allows generation of mobile Ajax apps from the same source code. How we do this is the subject of future posts.

mobile 2.0

Posted in mobile platforms on March 7, 2007 by Adam
Following up on my previous post on how we believe the best mobile apps are built, I’ve been asked what the somewhat nebulous term “mobile 2.0” means to me. From a technology perspective I would summarize it as the following things:

  • locally executing applications that work to some level even when disconnected and thus have…
  • locally cached data
  • leveraging relevant open standards where possible. for example be opportunistic about the presence of Opera, Minimo, IE, Nokia browsers on people's devices
  • controls that are optimized for the capabilities of a particular device and the unique functionality required by mobile apps. there is a natural tension between this goal and the previous point of just using a browser

From a user experience perspective I would suggest that it is the following three things:

  • “context and user aware applications”: delivering the app to users with knowledge of their preferences, their previous uses, their defaults and their likely current actions, to accelerate their tasks and put them into the right form/page/screen/action without a lot of navigation
  • “shared knowledge among applications” – in order to ease the previous goal of minimizing user data entry and navigation on mobile devices, applications should share information about user profiles, preferences, authentication, past usage and defaults. in this context its worth the extra effort to perform this integration
  • user interaction and content contribution – this is probably the only point that truly echoes web 2.0. mobile 2.0 apps should incorporate user feedback and results in intelligently rapid ways

From a business perspective, the following trends will drive massive user adoption and make mobile applications tools that everyone uses:

  • low to no cost through an advertising and transaction revenue model – users shouldn’t have to pay for services
  • cheap to free data plans – this still has to happen and may be the biggest barrier to entry
  • tools and platforms to make the nontrivial technology and user experiences described above happen for many applications and content in as cost-effective and rapid a way as it did for the web

WAP and mobile application development

Posted in mobile platforms on February 10, 2007 by Adam
Some of you may be using our Mobile Movie Times application or may have seen the demos of our various other mobile consumer apps. You may say “that sure looks a lot better than WAP”. Yes, all of our apps are locally executing rich clients optimized for their devices. Many of you may have noticed that most consumer-focused large web properties (Google, Yahoo, eBay) are also not building WAP portals but are instead focusing on locally executing optimized apps targeting a particular platform. So why is this? WAP’s been around for years. What’s wrong with it?

The big problems with WAP that we see are: no ability to execute functionality locally, lack of separation of presentation and data, a murky future standards picture, no ability to work with data offline, and no ability to take advantage of an increasingly diverse set of mobile device capabilities.

For the record, there are several applications and use cases where none of these are problems, especially one-way delivery of text-only information. And in an effort to allow an application to run on as many platforms as possible without rewriting it as a native locally executing app, use of WAP on some devices can be a way to multiply the payoff of investing in building the backend of a mobile application. That said, for most applications the following issues all inhibit WAP from delivering a truly optimal user experience on targeted devices.

Executing Logic Locally

The best applications (mobile, web-based and desktop) all execute some portion of their logic locally. This can be for:

  • validation
  • modifying forms, menus and screen display based on user behavior
  • rapidly showing different data based on user actions and interests without roundtrips to the server

These ideas are behind much of the current trend of Ajax-oriented websites: highly responsive, interactive, adaptive web pages and user interfaces.

WAP sites don’t do this and they don’t have enough of a scripting model to enable these features. AJAX-capable browsers can try to do these things (the pros and cons of mobile Ajax browsers are the subject of another post). But most web and mobile app providers trying to create optimal user experiences are building “native applications” in the APIs of the mobile platform to achieve these benefits.

Separating Presentation and Data

Neither WAP nor traditional HTML pages separate presentation from data. Form controls and their embedded data are tied together in one stream of information. This is why most WAP sites that interact with users are painfully slow. By contrast, optimized native applications run locally on the device and send only the changed data back and forth. It is possible to approximate these benefits with Ajax and other use of Javascript on a mobile browser, but it is inherently more difficult and the data exchange less efficient (again, we’ll talk more about Ajax browsers later).

Future Standards

At one time it seemed like the mobile web industry might come together on some standards based on some derivative of both WAP and XHTML Mobile Profile. That effort ended with major vendors each taking their own tack on the mobile web using divergent subsets of a proposed WAP 2.0 standard. We’ll talk more about that later. But at this point, it is not a controversial statement that WAP 2.0 and XHTML MP are not being used consistently and robustly by major mobile software providers and device manufacturers.

Working with Data Offline

Mobile networks still do not offer 100% coverage and this is likely to remain true for quite a while. The best mobile applications (such as Blackberry’s email) are optimized for syncing intermittently and transparently and for allowing the user to use the application offline. The best usage experiences, even for “vertical” single purpose apps, come from apps written to the native device operating system, making optimal use of available storage, trickle-syncing data to and from the server transparently to the user, and allowing the user to be as interactive as possible when out of coverage. WAP browsers (and Ajax browsers) are not capable of this.

Taking Advantage of Device Capabilities

Mobile phones are a diverse lot and, with the emergence of new “more than just voice call” features, are becoming more so by the day. There are several major device operating systems: BREW, Symbian, J2ME, Linux and Windows Mobile, to name a few. More importantly, phones have an increasingly differentiated set of capabilities: everything from taking pictures to recording voice to playing music to syncing contacts and PIM information to voice dialing to GPS. The best applications take full and optimal advantage of each device’s capabilities in all of those areas and more. Browser based applications (of any flavor) just don’t do this. To do this requires writing to the APIs of the device, even when a device supports supposedly “open” APIs such as JSR extensions to J2ME for specific capabilities.

So How Do We Build Such Optimized Native Apps Efficiently?

All this said, it’s quite expensive and difficult to write applications for these various devices in their native operating systems with their unique APIs in C, C++, or Java. And it becomes more so once you want to target multiple devices. J2ME covers a variety of devices, but certainly not the majority of the market.

But this hasn’t daunted the biggest web properties (such as Yahoo and Google) with extremely horizontal apps such as email and maps that can be targeted at many millions of users. The cost of hiring large teams of developers to spend months to years writing apps can pay off in these situations.

But is it possible to write apps at a higher level than the native procedural languages (maybe even higher level than WAP or HTML)? Can we do so and still allow single applications to be executed on a wide variety of devices? And, better yet, can we allow the apps to function at a gracefully degraded level by delivering content to WAP and Ajax browsers on platforms and devices that it’s not economical to devote a lot of attention to (while still just “writing them once”, as described in another post here)? All this is a topic for another post some time soon.

model oh model, where is my model?

Posted in Rails on February 9, 2007 by Adam
One of the strengths of Rails and ActiveRecord is that it makes it easy to build your models. You Don’t Repeat Yourself, with your model attributes being derived directly from your database tables. Then all you have in your model definition itself is a description of its relationships to other models, its mapping to tables, and its validations.

There are problems introduced by this power, however. When writing your application, where is your model definition? Your attributes are generally all in your table definitions. Your model’s relationships to other models are in your model definition file. Your validations are often in your model, unless your database is performing some form of validation (a practice that is generally encouraged outside the scope of Rails), so they are usually in both places! Plus, there’s a good chance in your Rails application that you have introduced at least one or two views to handle things that Rails needs (such as the need for a single column primary key). So now there’s a third place for your model attribute definitions.

Rails introduces another way of handling database definition and versioning: rake migration scripts. These are a very powerful way to version and maintain database schemas and data in a product neutral way. But they also spread the definition of “what is the model” out even further. I talk to Rails developers who say “I never look at my database administration tool – I just incrementally edit my rake migrations”. The problem is that this distributes the model definition out even further still: it is spread amongst multiple rake migration scripts.
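
A contrived example (not from our codebase) of what I mean: after a few releases, the “real” definition of a Station model’s attributes ends up spread across migrations like these, and nowhere else:

# 001_create_stations.rb -- the original table definition
class CreateStations < ActiveRecord::Migration
  def self.up
    create_table :stations do |t|
      t.column :name, :string
      t.column :zip,  :string
    end
  end

  def self.down
    drop_table :stations
  end
end

# 004_add_price_fields_to_stations.rb -- a later release quietly grows the model here
class AddPriceFieldsToStations < ActiveRecord::Migration
  def self.up
    add_column :stations, :price,      :float
    add_column :stations, :updated_at, :datetime
  end

  def self.down
    remove_column :stations, :price
    remove_column :stations, :updated_at
  end
end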

Many developers used to programming with other tools and environments are also accustomed to some form of graphical frontend for defining and managing their database tables. This can be at the “physical level” of tables or at a more conceptual level of entities and relationships (with tools such as ERwin or Visio). Such environments not only make it easier to understand and modify database (and model) relationships, they are also much better at concentrating the model definition in one place.

So here’s my suggestion: An incredibly useful improvement to the Rails app development toolset would be a graphical “model and rake migration designer” frontend. Probably an Eclipse plugin that plays nicely with RadRails (part of RadRails in the future?).

This frontend would allow you to define models, graphically draw relationships between models choosing from the Rails palette of possible model relationships, and generate rake migration scripts from model attribute definitions. It would generate incremental rake migrations when model changes were made. It would also allow you to set validations on attributes and models that would be reflected in the model definitions themselves. It would generate migration scripts for join tables when called for by a has_and_belongs_to_many relationship (a sketch of that case is below). There are a lot of other features I’d like to see in this tool, but I’ll leave off for now. No, I’m not considering building this. But I and many other Rails developers I know would be delighted to have this tool to use.
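
To take just the join table case: given a hypothetical pair of models declaring has_and_belongs_to_many, the tool would emit the standard Rails join table migration (no primary key, table named from the two table names in alphabetical order), something like this:

# Models drawn in the designer (hypothetical)
class Station < ActiveRecord::Base
  has_and_belongs_to_many :regions
end

class Region < ActiveRecord::Base
  has_and_belongs_to_many :stations
end

# Migration the tool would generate for the join table
class CreateRegionsStations < ActiveRecord::Migration
  def self.up
    create_table :regions_stations, :id => false do |t|
      t.column :region_id,  :integer
      t.column :station_id, :integer
    end
  end

  def self.down
    drop_table :regions_stations
  end
end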

Rails, REST and SOAP web services

Posted in Rails, software development on February 3, 2007 by Adam
I’ve spent a lot of my career in the web services world (before it was even called web services). So a lot of my colleagues often ask me “you like Rails?! how is that possible, the Rails community hates SOAP-based web services?”. I’ve often wondered about this myself. More specifically, I’ve wondered at the seemingly religious antipathy of the Rails community (including the illustrious DHH himself) to the “WS-Deathstar”.

So (primarily for the people who are confused by a web services zealot’s embrace of Rails) here’s my take on the matter: REST is great for many problems and applications. The first class support that DHH is providing for REST in the Rails core is very useful. I love the scaffold_resource generator and what it provides (as I’ve mentioned earlier here). I also like the approach to providing it: the idea of single controllers that respond to different clients (HTML, REST, and others) with different content. I do have reservations about the need to put wordy respond_to clauses in each controller action (I’ve proposed what I think is a simpler way here). Anyway, automatic support for REST is a good thing. REST is a great way to build simple point to point distributed app to app connectivity. And I agree that, for someone who wants to be aware of the innards of the code they are working with, it is simpler. I also agree that there are scenarios where a REST interface just makes it easier for multiple arbitrary clients on more diverse platforms and languages (there is perhaps still a small chance that it may be difficult to integrate a client SOAP stack on a device, but there’s basically no chance that it will be hard to invoke a REST interface from another platform).
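
For reference, this is the standard Rails 1.2 respond_to idiom I’m referring to (shown with a generic Post model); it works fine, but every action ends up enumerating its formats:

# A typical action: each output format gets its own clause.
def show
  @post = Post.find(params[:id])
  respond_to do |format|
    format.html                                  # renders show.rhtml
    format.xml  { render :xml => @post.to_xml }  # REST / programmatic clients
  end
end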

But does that mean that SOAP is bad? In the simple app to app connectivity scenario my opinion is that it basically doesn’t matter. In the vast majority of languages and platforms the work of consuming a SOAP-based web service is just as easy as consuming a REST service. But I agree that there’s little tangible advantage to using SOAP if all you ever need is a distributed method call.
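
To make the “just as easy” point concrete, here is roughly what the two styles look like from Ruby, using the soap4r driver for the SOAP case; the endpoint URL, namespace and method name below are placeholders, not a real service:

require 'soap/rpc/driver'
require 'open-uri'

# SOAP: point the driver at the endpoint and namespace, declare the remote
# method, then call it like a local one.
soap = SOAP::RPC::Driver.new('http://example.com/quotes/soap', 'urn:example:quotes')
soap.add_method('get_quote', 'symbol')
puts soap.get_quote('MSFT')

# REST: fetch a URL and parse the returned XML however you like.
puts open('http://example.com/quotes/MSFT.xml').read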

The value of the SOAP-based WS-Deathstar comes when more complex connectivity is required: multihop message routing, guaranteed delivery, publish-subscribe one-to-many integration, more advanced security in the plumbing than https (the latter probably only necessary when you’re routing messages). If none of this is ever going to be necessary (probably true for the typical consumer-facing website), then the whole SOAP vs. REST debate is moot, and going with the default easier path of REST doesn’t hurt one whit. For more advanced distributed connectivity problems (endemic to many “enterprise applications” that are seemingly anathema to the Rails community), a robust SOAP stack with extensive WS-Deathstar support would be useful for developers building complex connectivity. And yes, I mean a whole bunch of those nasty evil WS-* specs, including those not yet widely implemented. My top requests include WS-Addressing, WS-Eventing, and full WS-Security. Doing these would also obviate the need to provide other APIs for such services as publish/subscribe message routing. Given Ruby’s stunning productivity advantages, I think any and all of these would be trivial to implement. I remember implementing a very early WS-Eventing client prototype in C# in about a page of code. It would be even easier in Ruby (if there’s demand for it, I’d consider rewriting it, if only to demonstrate that a seemingly complex spec can in fact be made easy to use).

Are the WS-* specs overly complex? Yes, I think many of them are. There are plenty of “design by committee” artifacts in there. But for the most part such unnecessary complexity can be hidden from the programmer using them. A good API with reasonable default behavior (convention over configuration) covers a thousand design sins. And the layered nature of the specs means that you can pick and choose which facilities help you. Embracing the Deathstar specs would make Ruby a more powerful tool for a wider set of applications. The availability of Rails plugins (or just Ruby libraries) for these capabilities would make it more likely that Ruby gets used for a wider variety of programming tasks, which can only help grow and advance the Ruby and Rails community.