I’ve written before on the ways that banks could better serve their customers by improving the security of third-party data access and organizing their teams to nurture a culture of innovation. These changes are important because banks are missing out on the power of an API-enabled banking platform on top of which third parties can build innovative new products. There are lots of new capabilities that could be created by integrating banking services that would absolutely delight customers. For today, I’ll provide just a few examples of what I mean by API-enabled banking platforms and the types of services I’d love to build with them.
The mobile payments space is alive and well despite some speculation to the contrary, such as the recent suggestion that American Express has simply given up on it. From that article:
“It appears as if American Express is tired of waiting for the mobile payment craze to kick off and is taking the matter into it’s own cards, and in this case awards points. In fact, one of Amex’s points is that the program is a seamless integration that does nothing to change the way you currently pay for a cab.”
The author suggests that American Express’ latest offering – the ability to automatically use points when paying for a cab ride – be read as abandoning mobile payments. I disagree. I believe this program was designed to provide additional creative ways for cardholders to redeem points. American Express customers may like this, but American Express also has an incentive: outstanding reward point balances create a liability on the balance sheet. The more ways customers have to redeem points and relieve the liability pressure on the balance sheet, the less likely it is for large point balances to build up. What American Express has done with cab payments is no different than the integration with Amazon to shop with points.
However, the broader sentiment expressed by the author is important. In particular, he points out that:
“There is nothing about paying with a mobile phone that is better than paying with a credit card. So the perceived risk of security is certainly not worth it in the eyes of most consumers. The fact that the risk of fraud is extremely low for those making payments (rational information) makes no difference either.”
This is the biggest problem with (most) mobile payment solutions as they have been implemented to date. Consumers are willing to use mobile payments when they are genuinely better than the alternatives. The mobile payment solutions that have offered loyalty benefits (like the Starbucks mobile app, the most successful mobile payment implementation in the US to date) or a truly better experience (like Square Wallet with its ability to auto check-in and pay) have had success with adoption. Expecting consumers to change their behavior when they get nothing in return is foolish.
The real innovation in mobile payments isn’t in the plumbing of moving money around (card networks vs stored value vs ACH). All of those details are interesting only to payment geeks. The real innovation in mobile payments is in the new, value-added capabilities consumers will get that are built on top of that plumbing. Lots of people are working on mobile payments innovation and American Express’ cab payments with points has nothing to do with it.
If you see a Postgres out of memory error, it can be very alarming, particularly because this type of error tends to appear at the worst possible time: when you’re trying to scale your application. In a Rails environment, this type of error manifests itself as an ActiveRecord error:
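The exact text depends on your driver and Postgres versions, but it typically surfaces as an `ActiveRecord::StatementInvalid` wrapping the underlying Postgres error, something like this (the request size in the detail line will vary):

```
ActiveRecord::StatementInvalid: PG::Error: ERROR:  out of memory
DETAIL:  Failed on request of size ...
```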
Experiencing this error on Heroku Postgres can be even more unsettling because your immediate response may be “add more memory”, except doing so requires provisioning a new, larger database as a follower, waiting for its commit log to catch up, then going into maintenance mode and switching it over. Resist the urge to do this for just a few minutes, remain calm, and do a little investigation!
If you’ve never seen the error in your application before and it appears under heavy load, there is a chance that you have a poorly performing query and throwing more memory at Postgres may not resolve the problem. It’s worth taking a few minutes to understand what’s going on before you spend time taking action on a problem with an unknown cause. Luckily, Heroku makes this easy.
First, install the Heroku pg-extras plugin if you don’t already have it installed:
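At the time of writing, the plugin could be installed like this (the repository URL may have changed since):

```shell
heroku plugins:install git://github.com/heroku/heroku-pg-extras.git
```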
Second, look for running queries against the database:
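The pg-extras plugin exposes this as a single command:

```shell
heroku pg:ps
```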
In the output, look for queries that are idle, or for several queries with the same structure even if they have different WHERE clauses. If you see this, there is a chance that these queries are causing the Postgres out of memory error. Look at the query plan for those queries by executing an EXPLAIN on the query in question. If the query is expensive (lots of rows, sequential scans, multiple hash joins, etc.), you can further confirm it is the source of the problem by killing the query, again using the Heroku Postgres extras:
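```shell
heroku pg:kill procpid
```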
where procpid is the PID shown in the output from the heroku pg:ps command. If this temporarily resolves the problem, it’s a good sign that the queries you killed are causing the out of memory problem.
At this point, a permanent resolution to the problem probably requires improving the performance of the query. If the number of simultaneous problematic queries is somehow constrained, adding memory to Postgres may mask the problem. However, if the number of simultaneous problematic queries is a function of load on your application, adding memory will only mask the problem until you hit the next tipping point; eventually the problem must be fixed. Refactoring the query to limit the number of rows or eliminate sequential scans is the best place to start.
Everyone wants their application to be fast, but response times and payload sizes make a particularly big difference when you have a server application that is driving a mobile experience or, more generally, providing an API to users. There are some things that should be done to tune the performance of any application like load testing, performance profiling, database query optimization and so on. Once you do all those things, there are a few additional things that can be done to further improve the performance for mobile or API clients.
Enable gzip compression for responses
While it’s true that this helps regular web users too, the size of the payload going across the wire is especially important to mobile users and consumers of API services. In countries where mobile users pay for their data consumption and monthly allowances are heavily constrained, this can make a big difference for your customers’ wallets, too.
If you’re hosting your application on Heroku, the heroku-deflater gem makes this dead simple with the added benefit of not compressing images from your asset pipeline. If you’re using nginx, you can read an overview of gzip and other performance improving configuration settings here.
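As a rough illustration, a minimal nginx gzip configuration might look like the following (tune the types and thresholds for your own responses):

```nginx
gzip on;
gzip_comp_level 5;
gzip_min_length 1024;
gzip_types application/json text/plain text/css application/javascript;
```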
Shrink JSON responses and understand the performance of marshalling objects to JSON in your production environment
Marshalling objects to JSON can be surprisingly expensive on virtualized servers (like Heroku or AWS) and is sensitive to the variable nature of available CPU cycles. It’s worth verifying that the absolute smallest payload is being returned to the client; serializing unnecessary data from your model objects is very wasteful. Once you’re sure your response object content is minimized, profile the performance of the JSON serialization in a production-like environment. This is extremely important. I guarantee you the performance you see on a virtualized server is going to be far slower than what you see locally on your quad-core laptop. Do not guess. Do not assume your local environment is a good indication of how this performs in production. It is not. Something simple like this will tell you right away what is going on:
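A minimal sketch of such a measurement, using only the standard library; the payload below is a stand-in for your real response objects, and the measurement should be run in a production-like environment:

```ruby
require 'json'
require 'benchmark'

# Build a stand-in payload roughly shaped like an API response.
payload = Array.new(10_000) do |i|
  { id: i, name: "record-#{i}", created_at: Time.now.to_s }
end

# Measure only the marshalling step.
elapsed = Benchmark.realtime { payload.to_json }
puts format('to_json took %.3f seconds', elapsed)
```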
If JSON object marshalling is a significant component of your overall response time, consider alternative marshallers
There are alternative JSON object marshallers that perform very well. One that I’ve had good luck with is Oj. In one application, using Oj along with creating a simple hash object with the content to marshal to JSON reduced the average time to marshal from 7 seconds to less than 1 second.
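Swapping in Oj for a plain hash is nearly a one-liner (this requires the oj gem; the :compat mode shown here produces output compatible with the default to_json):

```ruby
require 'oj'

json = Oj.dump({ 'id' => 1, 'name' => 'record' }, mode: :compat)
```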
Consider caching frequently used, infrequently changing reference data in marshalled JSON form in memory
Given the high cost of querying and marshalling data into JSON, it may be worth it to use in-memory caching to store the data in JSON form. For data that changes infrequently, or where cache invalidation can be managed effectively, this allows you to respond to requests for the data very quickly. The response can then be rendered from the controller directly as text:
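Here is a self-contained sketch of the idea, using a plain memoizing hash as a stand-in for whatever cache store you use; load_countries is a hypothetical stand-in for the real reference-data query:

```ruby
require 'json'

# In-memory cache of reference data already marshalled to JSON.
JSON_CACHE = {}

# Hypothetical query producing infrequently changing reference data.
def load_countries
  [{ code: 'US', name: 'United States' }, { code: 'CA', name: 'Canada' }]
end

def countries_json
  # Query and marshal once; subsequent calls return the cached JSON string.
  JSON_CACHE[:countries] ||= load_countries.to_json
end

# In a controller the cached string can then be rendered directly as text:
#   render text: countries_json, content_type: 'application/json'
```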
While Rails logging is usually a very small component of overall response time of an application, there are cases where the overhead of logging can significantly degrade performance. In one application hosted on Heroku with significant database-intensive operations for several controller methods, disabling ActiveRecord logging resulted in a 34% improvement in response time. (To give you a rough idea of what I mean by database-intensive, the average response time for these requests was ~7 seconds, with >90% of that time taken up by ActiveRecord activity. The ActiveRecord-related log output in this application exceeded 100 GB a month.)
Disabling logging is easily done by adding this block in your environment configuration:
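One way this can look in an environment file such as config/environments/production.rb (a sketch; adjust to taste):

```ruby
# config/environments/production.rb
config.after_initialize do
  # Silence ActiveRecord query logging entirely
  ActiveRecord::Base.logger = nil
end
```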
Different hosting environments will likely see different performance benefits; I suspect hardware plays a large role. The performance improvement was negligible when I benchmarked the same change in my local development environment that wrote logs to SSD storage, for example.
Bottom line: your mileage may vary, but disabling AR logging may be an easy way to improve performance of database-intensive operations.
Minimizing Heroku Slug Size When You Have a Lot of Image Assets Using CloudFront, S3 and the Asset Pipeline
The Rails asset pipeline was a major improvement in organizing static assets in a Rails application. Rails also offers a great mechanism to configure an asset host for rendering static assets including images. Amazon’s AWS CloudFront offering is an easy-to-configure CDN that can be used in combination with the Rails asset host configuration, but if you have a lot of images and host your app on Heroku, this can lead to problems from having a very large slug.
The banking industry can – and must – do better as it relates to securely enabling limited access to third-parties on behalf of their customers. Account aggregators like Yodlee and Intuit – and the myriad other products built on top of Yodlee and Intuit aggregation services – rely on customers sharing their account credentials for the service to work. Banks know this and in many cases are active participants in enabling the services provided by account aggregators. And yet, this very fact contradicts many banks’ security recommendations and, in some cases, their own terms and conditions. For example, one of the top 5 U.S. banks has the following in its terms and conditions for online banking:
You agree to … keep your passcode secure and strictly confidential, providing it only to authorized signers on your account…
Another top 5 bank’s terms and conditions state:
You agree that … in circumstances where locations of the Website require identification for process, you will establish commercially reasonable security procedures and controls to limit access to your password or other identifying information to authorized individuals.
How many of Mint’s more than 10 million customers are breaking these terms and conditions? Is it really reasonable for banks to expect that customers are abiding by these terms when they themselves enable customers to so easily violate them? How would an average consumer banking customer even know how to “establish commercially reasonable security procedures and controls”?
It’s no secret that non-traditional bank competitors like PayPal, Dwolla, and Square have taken aim at banks. Five years ago, it was difficult to have a serious conversation inside a bank about the possibility that PayPal would become a serious threat; now many bank executives accept it as a fact. One needs to look no further than the breakneck pace of growth by companies like Square to find evidence of the value that a non-traditional approach to banking products brings to the marketplace. Some bank executives will say that these new competitors have an unfair advantage because they are less regulated than “real” banks. While there is an element of truth to this, the regulatory playing field is becoming increasingly level in the United States every day. Other bankers will decry “legacy systems” or other IT-related causes that make the banks disadvantaged. Again, there is some truth to these claims, but neither technology nor regulatory constraints fully explain how the banks ended up playing defense against the fintech startups.
The reality is that most traditional banks have a much deeper and more fundamental impediment to innovating like their non-traditional competitors. Most of these legacy banking institutions are structurally and organizationally ill-prepared to bring to market products that are predominantly technology-centric. Many banks have been organized around “products” that are really services: checking accounts, cash management, credit cards, merchant accounts and the like. While there is certainly a technology component to these offerings, that technology has historically played a supporting role. Technical solutions – those that are primarily exposed or consumed using APIs or mobile applications – have not been the dominant feature of the products or services banks have offered; they’ve merely been an enabler.
The xml-mapping gem provides a very useful interface if you want to be able to easily marshal data between XML and native Ruby objects. It’s great at mapping XML documents to Ruby objects, but there is one potential catch when you want to marshal a Ruby object into XML: adding namespaces that may be required by the parser digesting that message. Luckily the gem provides an easy hook to fix this: the post_save method.
The post_save method is called after the xml-mapping gem has marshalled the source Ruby object into an REXML::Element object but before it is returned. Within this method you can manipulate the XML and insert the namespaces that are required for the message to be parsed correctly. One other gotcha is that setting the default namespace using REXML’s add_namespace did not work for me – instead you can set this namespace using add_attributes.
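The namespace fix-up inside post_save boils down to a couple of REXML calls. Here is a self-contained sketch using a bare REXML::Element in place of the element xml-mapping passes to post_save; the namespace URIs are illustrative:

```ruby
require 'rexml/document'

# `xml` stands in for the REXML::Element that xml-mapping hands to post_save.
xml = REXML::Element.new('Invoice')

# Prefixed namespaces can be added with add_namespace...
xml.add_namespace('xsi', 'http://www.w3.org/2001/XMLSchema-instance')

# ...but setting the default namespace via add_namespace did not work for
# me; adding it as a plain attribute does:
xml.add_attributes('xmlns' => 'http://example.com/invoice')

puts xml
```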
The foreigner gem is great for creating foreign key constraints with ActiveRecord. One potential problem arises when you want to destroy a record that is referenced by a foreign key in another model. In situations like this, it’s desirable to know before calling the destroy method that the destroy will fail. (For example, to notify the user or avoid presenting a link for destroying the record.) This capability isn’t native in ActiveRecord but can be added with a small monkey patch.
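Here is a sketch of the shape such a check can take. In a real Rails app you would reopen ActiveRecord::Base and derive the association list from reflect_on_all_associations; the minimal stand-in model below keeps the snippet self-contained:

```ruby
# A record can be destroyed when no dependent rows reference it. In
# ActiveRecord, the association names would come from
# reflect_on_all_associations(:has_many) on a reopened ActiveRecord::Base.
module CanDestroy
  def can_destroy?
    self.class.dependent_associations.all? { |name| send(name).empty? }
  end
end

# Minimal stand-in for an ActiveRecord model with a has_many association.
class Order
  include CanDestroy

  def self.dependent_associations
    [:line_items]
  end

  attr_reader :line_items

  def initialize(line_items = [])
    @line_items = line_items
  end
end
```

With this shape, an order with no line items reports it can be destroyed while one with dependent rows does not, so the UI can hide the delete link before ever attempting the destroy.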
With this patch you can just check if .can_destroy? is true before attempting the destroy or providing the user the option.