Testing in iOS

I’ve been researching Continuous Integration (CI) and unit testing in iOS lately and learning a lot about testing your application. So far, I’ve learned how to write unit tests in Xcode, some basic guidelines for writing those tests, and experimented with Jenkins CI server to create automated builds, tests, email notifications, and archives. This post is meant to be a collection of my thoughts and takeaways.

Two Types of Unit Tests

There are two main types of unit tests in Objective-C: logic tests and application tests. There are also two general approaches to unit testing, which happen to correlate: bottom-up and top-down. In bottom-up testing, you test each method or class individually, entirely independently of the others. In top-down testing, you test the functionality of the app as a whole.

Because your logic tests are independent of each other, they can run without needing the context of your application’s controllers or views. Logic tests can only be run on the simulator. Application tests are (appropriately) run in the context of a host app, and can be run either on a device or in the simulator.

Efficiency in Unit Testing

After figuring out how to create unit tests, run them, and see the little green checkboxes telling me they passed, my first reaction was to get a little unit-test-happy. I was thinking oh, I can test this, and that, and all of that… Well, I'm finding that there's a balance between efficient, high-quality unit tests and simply testing every single input and output.

When you test parsing a JSON response from your server, for example, ONE way to do it is to assert that the final property value is equal to the original value in the JSON. There are, however, many more ways to test your parsing and relational mapping. You might try testing for valid data using character sets, checking that string lengths are greater than zero, or that birthdates fall before today's date. Then try four or five different data sets, rather than a single one.

Basically, rather than test for a specific outcome with your logic tests, be a little more broad. Trying to test for a very specific output might be beneficial in some cases, but it can quickly become tedious and time-consuming. Testing for types of data, unsupported input characters, and invalid states can cover more errant cases in less time. Significantly less time.
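To make that concrete, here's a minimal sketch of this broader style of test. It's in Python rather than Objective-C just for brevity, and the field names and validation rules are hypothetical, not from a real project:

```python
import string
from datetime import date

# Allowed characters for a name field -- letters plus a few separators
ALLOWED_NAME_CHARS = set(string.ascii_letters + " '-")

def validate_user(user):
    """Broad sanity checks on one parsed record: valid characters,
    non-zero lengths, and a birthdate before today's date."""
    assert len(user["name"]) > 0
    assert set(user["name"]) <= ALLOWED_NAME_CHARS
    assert date.fromisoformat(user["birthdate"]) < date.today()
    return True

# Run the same checks against several data sets, not just one
samples = [
    {"name": "Ada Lovelace", "birthdate": "1815-12-10"},
    {"name": "Grace Hopper", "birthdate": "1906-12-09"},
    {"name": "Alan Turing", "birthdate": "1912-06-23"},
]
assert all(validate_user(u) for u in samples)
```

The same handful of checks covers every data set you throw at it, which is the whole point: one broad test instead of dozens of exact-value assertions.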

The Sweet Spot 

 


I found this graphic in a blog post about unit testing best practices about halfway through my research, and I was really glad I did. Essentially, unit tests should hit the sweet spot of testing individual units (bottom-up) or testing the entire system (top-down), and not fall into a dirty hybrid that only costs additional time and effort without proving much.

 

RestKit - Load data from local json file

I've found that it can be very helpful to be able to load data locally rather than from a server, especially for testing and for situations where you don't have control over the availability or stability of the server side. This code sample shows how, using RestKit, you can load JSON directly from a file. If you know what the expected server response is but don't have access to the server, this allows you to put all the object mappings in place and load your object without requiring a live server. https://gist.github.com/kyleclegg/5846568

Google Fiber coming to Provo

Our pre-registrations paid off!

Let's go utah... pre-register now. All of you. 1000Mb/sec #googlefiber fiber.google.com/about/

— Kyle Clegg (@kyle_clegg) July 26, 2012

 


The #EpicProvoAnnoucement hashtag on Twitter has truly turned out to be epic. Provo is getting Google Fiber, and it makes sense for so many reasons:


  • Entrepreneurship - Provo is one of the best places for tech startups outside of Silicon Valley -- on the same level as Austin.
  • Infrastructure - There's an existing infrastructure in place, put there by the city of Provo nearly 10 years ago. It failed (IMO) due to the greed of those who decided to limit the speeds to near cable levels and charge only marginally less than other providers. Stifling innovation in the pursuit of some extra $$s.
  • Data - Genealogy research is huge in Utah, with several large organizations likely to get on board ASAP.
  • Students - Provo is home to a major university in BYU, with another university of 25,000+ students ten minutes away in UVU.
  • Innovation - Utah IS Silicon Slopes!

Company Culture

This resonates extremely well with me: "There’s still a traditional view out there that agile methods, hacking, open-source and new technologies don’t have a place in serious business. Our view is that all of those wonderful things power people, businesses, and society forward. We’re obsessed with how things work, are inspired by change, and simply love to build stuff. So we hire people and take on projects that let us do just that."

from http://www.controlgroup.com/careers.html

Ditching MySQL

As a Java/OO developer first (web later), I got my start with databases by setting up a couple of WordPress blogs -- mostly simple UI stuff, but configuration and a few other cases got me into phpMyAdmin and MySQL. I wouldn't be surprised if this was the case for hundreds or thousands of others. I don't -- or didn't -- mind MySQL so much because, honestly, it got the job done for those simple blogs and it was easy to get going. However, now that I am surrounded by "production-level" projects, i.e. projects at work that affect millions of users and backends for my own mobile apps, I am extremely concerned about the performance, consistency, (over)complexity, and maintenance of my databases. I've gotten familiar with Postgres, and while I don't fully understand all its benefits over MySQL, it works great, feels sexy, and posts like this have pushed me to make the move.

Also, using frameworks like Ruby on Rails, I feel abstracted far enough from the database level that the change really wasn't too difficult. It makes me wish I hadn't used MySQL in the first place and had instead started with SQLite (for its support on mobile devices) or Postgres.

Mobile App Competition Results


This post is long in coming... it actually should have been written at the end of November. A brief recap of Growing Pains and how it took home some awesome awards in the BYU Mobile App Competition: we had big plans for Growing Pains, but at the time of the submission deadline we were probably only 40-50% done with our first iteration feature set. I honestly was not expecting to take home much from the competition. The one award I was fairly confident about was best Ruby on Rails backend, mostly because I was guessing that we were one of the only RoR backends.

My sister Kandace and I were there and were pumped when they announced Growing Pains as a top-16 semifinalist out of 25, with a guaranteed $250 cash prize.  Dru couldn't make it because it was during the day and she would have to miss work.  All 16 semi-finalists gave a 2 minute demo and presentation on their app, which was exciting for me.  I've never made a pitch to 500 people before.

Then the awards... Kandace and I were super stoked when the first award they gave out -- the BizVector award for business potential from MokiNetworks -- went to Growing Pains! $100 gift card. Sweet! Next up, the finalists. Again, the very first app they announced (which just added to the excitement and surprise) was Growing Pains, taking 5th place with a $1000 cash prize. Other top apps included a couple of games and two business productivity apps, with a top cash prize of $3000. We also won the Ruby on Rails API award, which was an iPad for each team member. In total we came away with $2100 in awards and prize money, plus a heavy dose of validation and encouragement about our idea and the direction Growing Pains was heading.


We've continued working on Growing Pains and recently started beta testing with a couple of family members. If you're interested in giving us some pre-release feedback, let me know!

iOS Final Exam

In 3 hours, do the following:
  1. Download and parse JSON for the version number and location of a zipped SQLite DB file
  2. If this is the first run, or the version is newer than previously downloaded, download the zip using a progress indicator
  3. Save the zip to the device's Documents folder
  4. Decompress the zip file and save the contents
  5. Delete the saved zip file
  6. Open a connection to the DB, query the last updated time, and display it

Recommended frameworks: AFNetworking, ZipArchive, SQLite
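The first two tasks boil down to a version check against whatever you last downloaded. Here's a rough, platform-agnostic sketch of that logic in Python (the manifest keys `version` and `db_url` are invented for illustration; the exam's actual JSON schema may differ):

```python
import json

def needs_download(manifest_json, stored_version):
    """Parse the version manifest (task 1) and decide whether a newer
    zipped DB should be fetched (task 2)."""
    manifest = json.loads(manifest_json)
    remote_version = manifest["version"]
    # Download on first run, or whenever the server's version is newer
    if stored_version is None or remote_version > stored_version:
        return True, manifest["db_url"]
    return False, None
```

On iOS the `stored_version` would live somewhere like NSUserDefaults, and the actual fetch would go through AFNetworking with a progress block.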

And the result... http://screencast.com/t/UXm4kHuMnuq. I didn't have time to make it look pretty, but I finished all 6 tasks. It took every minute of those 3 hours, but I'm pretty proud of it. :0

Duplicate an SQL Record

When working with SQL databases there are times you want to clone database rows and, for whatever reason, don't want to write out a ton of INSERT statements.  This would be easily handled by

insert into users select * from users where username = 'webuser1';

except that this will not handle unique key constraints, i.e. when your ssn or user_id fields must remain unique.  One convenient way to get around the restrictions on unique keys is to create a temporary table, clone the record, change the necessary fields, then copy it back to the original table.
CREATE TEMPORARY TABLE users2 ENGINE=MEMORY SELECT * FROM users
WHERE username = 'webuser1';
UPDATE users2 SET username = 'webuser2'; -- change the username so it stays unique
-- update any other fields that must be unique
INSERT INTO users SELECT * FROM users2;
DROP TABLE users2;

Tablet or Laptop

A past professor recently asked island (the information systems forum/community I participate in) whether he should get a tablet or a laptop for his teenage daughter.  She's been asking for a tablet, and he wants a solution that will work for her both for play and for school and homework.  It's an interesting question, and a very relevant topic, since laptops and tablets are accepted for use in many high schools across the US.  When I was in high school (2001-2005), using a laptop at school wasn't even a consideration.  I wonder what kids will have in 30 years...

Here are my recommendations at around the $300 price:
  1. Nexus 10
  2. iPad 2
  3. Microsoft Surface (with a caveat)

The Nexus 10 is going to be pretty ideal for this situation.  I think a high schooler can very reasonably do their studying and homework on a tablet, with the exception of research papers and essays.  For those she may need to use the family computer (or at least a keyboard dock), but I wouldn't consider that case enough reason to go with a laptop instead of a tablet, especially if she's already wanting a tablet.  I have a Nexus 7 and I really like it for reading and for taking notes.  I love Evernote because it's on every platform I could dream of using (web, iOS, Android, Amazon, OS X, Windows, and even Windows Phone, but not Linux AFAIK).

As another option, I think you can get a 16GB iPad 2 in that price range, which would be a great choice as well.  It wouldn't be brand new, though, which may or may not be important to a parent.  I know some would rather their teenagers have an older generation of a product than something fresh off the shelves.
About the Microsoft Surface... if it were a year from now, I would recommend getting a used Surface.  It would definitely have all the Windows-based functionality that someone could need.  However, because it was just released Friday and its price point is $499, I don't think it's a good option today.  But it's worth mentioning.

The new Copyright Alert System

Relating to my Information Security class (and just from listening to local news while driving home from school), I've recently heard quite a bit about the new Copyright Alert System.  I decided to do a little reading and learn more about it.  A lot of my comments come from reading this Hacker News article: http://thehackernews.com/2012/10/isps-will-warn-you-about-pirate-content.html#sthash.hqrC94wn.dpbs. The Copyright Alert System (CAS) will begin showing up in the U.S. in late 2012, according to the U.S. Center for Copyright Information. The new Copyright Alert System has partnered with Internet Service Providers (ISPs) such as AT&T, Cablevision, Comcast, Time Warner Cable, and Verizon to deter subscribers from infringement over peer-to-peer networks. Each provider's implementation may vary, but their respective flavors of the system are expected to roll out within the next two months.

The new system works by monitoring the illegal transfer and downloading of copyrighted files using MarkMonitor, a brand protection company, and issuing warnings for infractions. Gradually more severe responses are given to each subsequent infringement, beginning with emailed warnings, escalating to throttled data speeds, and, for more serious offenders, suspension of service and possible legal action, including severe fines. In addition to protecting original content creators and owners, the CAS also benefits the ISPs. If accused of illegal activity, offenders can request a review of their network activity by paying a $35 fee. If the offender is found not guilty, their money will be refunded. If they are found guilty, the fee will be kept.

The Center for Copyright Information applauds the new system, saying that it is “designed to make consumers aware of activity that has occurred using their Internet accounts, educate them on how they can prevent such activity from happening again, and provide information about the growing number of ways to access digital content legally.”

“Contrary to many erroneous reports, this is not a ‘six-strikes-and-you’re-out’ system that would result in termination,” the group said in a press release. “There's no ‘strikeout’ in this program.” However, apparently there is some controversy here, because there are rumors of a six-strike limit, yet no given policy on what happens if people continue to download or share pirated files, even after six warnings.

Assets to the Copyright Alert System
  1. MarkMonitor - A system that monitors network activity involving copyrighted media and can detect the illegal sharing and downloading of copyrighted files. Goal: prevent end users from abusing the ease of online information exchange by monitoring for illegal activity.
  2. ISPs - Previously, identifying illegal downloaders was up to the content owner. ISPs will now play a large role in enforcement. Goal: since ISPs have access to all network activity, they can more accurately detect infringers and better penalize users for their negligent or purposeful illegal activity.

Threats to Online Media
  1. End users downloading illegal media, such as music. Although this attacker is not the average black hat haxor, they are still an "attacker" in the sense that they are performing illegal activity.
  2. Services that promote the sharing of illegal media and gain revenue through advertisements on their websites, e.g. Megaupload.

Weaknesses of the Copyright Alert System
  1. Users can still transfer copyrighted material via USB, FireWire, or some other connection not monitored by the ISP.
  2. Software that cracks copy protection, which would prevent MarkMonitor from detecting the illegal sharing and downloading.

We should all be aware of the issue of online piracy and how to share media within the confines of the law. Piracy laws are in place to protect businesses and individuals, and as an IT generation and information consumers, we should be aware of the latest technologies in information security, from protecting enterprises with hardware or software to protecting content creators with the Copyright Alert System.

Web tracking firm, Compete, settles charges for illegally collecting sensitive user data

I recently read an article published by Ars Technica, one of my favorite websites for tech news, education, and product reviews, as part of a school assignment and wanted to post my thoughts. I've found that Ars Technica is one of the more intelligent and educational technology blogs in existence, which is very refreshing in a web full of tech blogs that want your clicks and try to attract you with gimmicky headlines and juicy gossip. The article is titled "Web tracking firm settles charges it collected passwords, financial data" and recounts the recent happenings surrounding Compete Inc. and its abuse of data tracking, the lawsuit, and the subsequent settlement. The article was published on 10/22/12 and can be found at http://arstechnica.com/tech-policy/2012/10/web-tracking-firm-settles-charges-it-collected-passwords-financial-data/.

The Massachusetts-based company agreed to obtain end users' consent before collecting future data on their browsing history, and also agreed to anonymize customer data. The Federal Trade Commission (FTC) filed charges against Compete relating to a toolbar that gave consumers "instant access" to information about the websites they visited, as well as a second software package called the Consumer Input Panel that gave consumers the opportunity to win rewards for expressing opinions about products and services. Both software packages did more than they advertised, said the FTC. "In fact, Compete collected more than browsing behavior or addresses of webpages," FTC lawyers wrote in a civil complaint filed in the case. "It collected extensive information about consumers' online activities and transmitted the information in clear readable text to Compete's servers. The data collected included information about all websites visited, all links followed, and the advertisements displayed while the consumer was on a given webpage."

Compete began collecting credit card numbers, social security numbers, and other sensitive data as early as January 2006, and has now agreed to settle the charges made against it. The article does not list an amount that Compete must pay, any other details on what the penalty will be, or what will be done with the data sitting in Compete's databases, but it does say that Compete will settle.

This article describes a situation that, unfortunately, is somewhat common. In some cases the company is identified and brought to court; in other cases it likely goes undetected. All internet users should be aware of the risks of using technology, and specifically of using third-party tools in addition to their web browser. We should be wary of unneeded plug-ins, toolbars, widgets, and other applications that serve a minute purpose and have little industry credibility. In my opinion, trusting companies like Google and Facebook is a much safer approach to protecting your data and your identity online, because these industry leaders are extremely transparent in how they manage end-users' data and are increasingly under the microscope in terms of what they do with that data. This scrutiny helps create strict policies and regulations that help protect our data.

Cow Tipping Updates

I decided to put a few hours into my very first Android app, Cow Tipping, over the weekend (originally released 8/1/2011).  If you haven't seen it before, the gameplay consists of tapping on cows repeatedly to make them tip over, trying to tip as many as you can in 20 seconds.  It was a pretty fun first-timer project for me, and since I pushed out a bug-patched version 2 last fall, it has gone untouched for 13 months. After getting familiar with my old code again (which included a lot of "huhs" and "why in the worlds"... n00b mistakes), I started getting pretty excited again.  Believe it or not, Cow Tipping has averaged about 1000 downloads a month since its release, totaling nearly 14,000 downloads to date.  While the app itself doesn't provide much value to anyone, really, and only sits at about 1700 active installs, I started to imagine the possibilities if I invested some time into adding new modes of gameplay or some system of levels.


I decided to give it a test run by adding a "Frenzy Mode" in addition to the classic gameplay.  In frenzy mode, cows tip over with a single click, as opposed to the variable number of clicks in the original release (random number between 2 and 5).  Took me about 5 hours to clean things up, add the new gameplay mode, update the high scores screen, and add a Twitter share.  I pushed out the update Monday night at about 7 PM and noticed it live at 9 PM. If it goes well then maybe it'll be worth adding more levels.

Get Cow Tipping on Google Play!

Defending Google

A friend recently posted an article to the Information System forum community that makes the claim that Google Search is only 18.5% Search, citing the following screenshot. See full article.


To which I respond that I'm going to side with Google here.  For one, the article seems to imply that the remaining 81.5% of the page is filled with irrelevant, ad-related content for Google.  A classic example of using favorable "statistics" to prove a point, right?  Quite a bit of the page is taken up by white space, menus, and links relating to other Google services.  I agree that the author has a point, but to accurately compare the two, it seems like search-results real estate vs. ad real estate should be compared, not total screen space.  His screenshot also only shows the view from a single monitor.  On my 16:9 screen (I'm guessing his is 4:3) there is much more empty, negative space, which makes the page feel balanced and keeps the ads from dominating.  It seems very relative.

My personal opinion is that if you don't like the way a free service supports its business (e.g. ads, soliciting donations, or otherwise), then don't use the service.  I feel like free, ad-supported services are a great business model in certain markets, and I tend to roll my eyes internally when I hear rants about Facebook ads, Wikipedia donation requests, and even the less-obtrusive Google AdWords.  My brother worked on the Bing digital advertising team at Microsoft for a few years, so maybe he rubbed off on me, but I have no problem with a few ads in my search results.  These companies work long and hard to serve up relevant ads that are, at best, something helpful you could use or are interested in and, at worst, something you can easily skip over.