Maybe Microsoft is doing it right. Or maybe they're not.

Dell Venue Pro, running Windows Phone 7.5

Since I first began seeing concepts of Windows 8, I kept thinking, huh, maybe Microsoft is onto something here. Despite being both an Android and an iOS fanatic, I have been impressed with the Windows Phone OS since trying out Windows Phone 7.5 from December 2011 to March 2012 on an extra Dell Venue Pro my dad had available. The hardware was... a brick... to put it nicely, but the OS itself was excellent.

Over the last 18 months I've found myself continuously (and to my friends' amusement) defending Windows Phone as a viable mobile operating system and, by extension, defending Windows 8. I've gushed over the Lumia 920, HTC 8X, and now the Lumia 1020. I almost bought all three of these devices on numerous occasions (just ask Dru). While I maintain that Windows Phone is a great OS, I'm coming to terms with the idea that maybe Microsoft isn't doing it right and that they did in fact jump the gun on the idea of the convergence of the desktop and the mobile experience. Since Microsoft's release of Windows 8 there has been talk of how revolutionary and visionary it is. Ahead of its time. Some said it showed that Microsoft had not only caught up to Apple and Google, but had far surpassed them in the ability to deliver innovative products that end users didn't even know they wanted until they tried them (take the iPad, for example).

Well, after a year of saying to myself, yeah, maybe they did get it right, maybe Windows 8 isn't as radical as all the critics say, maybe we will all be using touchscreen desktop computers in a couple of years, I'm now saying: maybe not.

The truth is that people don't want the same experience on a tablet as they have on their desktop or laptop. Sure, they like familiarity, but at the core a user's interaction with a tablet and a user's interaction with a desktop computer are just different. And people are fine with that. The design of an application in the desktop environment is (and should be) vastly different from the tablet and phone representations of that same application. Probably one of the best examples of this is the Day One app for iPhone, iPad, and Mac. Ask any active Day One user if they'd like to combine either the iPhone and iPad experience or the iPad and Mac and they'd look at you like you were crazy.

Day One for iPhone, iPad, and Mac

Further evidence that people just don't want to merge desktop computing and mobile computing: look at this Acer commercial comparing their 8" Windows tablet to the iPad mini. This is just my opinion, and I bet some Windows 8 fans view this commercial as a win for Acer and Windows 8 Pro, but first, who wants to play Halo on a tablet (not that they even look like they're actually playing...)? Don't get me wrong, I love iPad games... Plants vs. Zombies Vasebreaker Endless mode... that'll keep me busy for HOURS. But I don't really care for the idea of getting a dumbed-down PC gaming experience on my iPad. Maybe if the entire game were reimagined for iPad, and it really was just a distant relative of Halo, but the ability to run full-fledged PC games on a tablet just doesn't do much for me. Okay, next flaw in the video: of all the things you can do on a tablet, why show someone accepting changes to a Word doc? I mean, seriously, of all the things I've used a tablet for (thinking of my Kindle Fire, Nexus 7, and iPad), I have never once wanted to accept changes to a reviewed Word doc. And if I did, I wouldn't want to be using the traditional Word app to do it (see all those tiny touchpoints?!).

UPDATE: I hadn't done my research on Halo: Spartan Assault, and it looks like it very much is the Halo experience reimagined for touch. That's cool. What I was trying to get at is the idea of playing traditional PC games on a tablet running Windows 8 Pro.

What I'm getting at (and have probably repeated 20 times by now) is that I no longer believe that we are heading in the direction of an integrated desktop and mobile computing environment. Those who never got on the Windows 8 bandwagon are probably thinking, yeah, knew that all along. But for those who have seriously entertained the idea of everything converging into a single, universal experience, either the timing is wrong or the implementation is wrong. Either way, maybe Microsoft isn't doing it right after all.

Configuring Xcode and Bamboo CI Server

NOTE: This document was written with respect to Bamboo 4.4 and version 1.8 of the Xcode plugin. I'll update it to Bamboo 5 and v 2.1 soon.

Here's the process, from top to bottom, including my little discoveries along the way.

Install the Bamboo iOS, Cocoa and Xcode plugin

  • First off, note that Bamboo has its own marketplace (as opposed to one for your JIRA instance). In order to do this, you need to be an administrator in Bamboo (but not JIRA).
  • Navigate to Administration -> Plugins -> Find New Add-ons. Search Xcode and install the plugin. No additional configuration or plugin management needed here. This simply enables a few features like support for adding the iOS SDKs and Xcode tasks, which you'll get to later.

Creating a remote agent in Bamboo

  • From the Administration tab, go to Agents and select Install remote agent. You can install it wherever you'd like... I wouldn't recommend your Downloads folder, though. I put mine in /Documents/dev/bamboo-agent-home.
  • If you have problems running your remote agent to the tune "Could not load properties", you may need to explicitly set your port number or open a port on your firewall. This Atlassian Support Q&A was helpful for addressing this issue.
  • For more information see the Remote Agent Installation Guide.

Configuring Capabilities for your Remote Agent

  • Agents use capabilities for two purposes, (1) to tell the agent what it can do, and (2) in order for jobs to identify which agents can run which build plans.
  • Regarding point 1, you will now create two capabilities that will allow you to build your project for iPhone and for the simulator.
    • From the Agents screen, select your remote agent
    • Select Add Capability
    • Select Xcode SDK as the capability type
    • Enter Simulator - iOS 6.1 as the SDK Name
    • Enter iphonesimulator6.1 as the SDK Label. This part is what took so much time to figure out; a couple of the blog posts out there recommend using the command xcodebuild -showsdks to list the SDK names and labels available on your machine.

  • Regarding point 2, this is important because if you don't specify capabilities, your build plans will try to run on all agents that meet their basic requirements. In our case, our web team's build plan's requirements weren't set to the appropriate capabilities, so whenever a build occurred and the OS X remote agent was running, it would try to run on both agents, causing it to fail immediately on our remote agent. Note that custom capabilities are often used to control which jobs will be built by which agents, the most common one I've seen being an isLocal flag, which only allows the local agent to run the build plan.
  • For more information see Agents and capabilities.

Connecting to your Git repository

  • After you have configured your remote agent, create your plan. This part is fairly straightforward, with the exception of connecting to your Git repo, which can be tricky.
  • In our case, a user named 'fisheye' has already been set up with the appropriate permissions and keys on neptune. Unless something has changed, don't worry about generating new SSH keys for yourself and putting them up on neptune. Have Jacob log in and upload the private key for this user.
  • Note that the repository URL should include ".git" at the end, i.e. ssh://something@path.com/home/git/myapp.git

Creating Tasks for Your Build Plan

  • Task 1 - Source Code Checkout
    • The first task you'll want to create is Source Code Checkout. Actually, it was probably done automatically. Make sure the repository is set to your project and run your build plan. You should be green.
    • At this point, if you run into a NoClassDefFound error, check out this support Q&A. Basically, the fix is to downgrade your Xcode plugin to v1.8 and try again. You can download it at the link at the top of this post. To install it, within Bamboo go to Administration -> Plugins -> Manage Add-ons -> Upload Add-on. My guess is that the latest version of the plugin targets Bamboo 5.0 and at the time of this posting we are running Bamboo 4.4.4. The plugin says it supports 4.4... but yeah...no dice.
  • Task 2 - Build the Xcode project
    • Next up we want to create a task to build the Xcode project. This is where those iOS SDK capabilities that we setup earlier come back into play. The first thing to keep in mind here is that essentially all this task will do is build your project, the same way you would if you built it from terminal using xcodebuild, e.g. xcodebuild -sdk iphonesimulator6.1 -project PRTablet.xcodeproj -alltargets -configuration Release. 
    • My personal suggestion here is to forget Bamboo for a minute and try building your project from the command line. In our case, command line builds failed with compile errors because the main PRTablet target can't see the RestKit subproject files (despite it building correctly in Xcode). To get around this, you'll want to set up a workspace and scheme rather than an xcodeproj and target, which will build all projects into the same directory, referred to as the workspace build directory, where all of these files are visible to each project. Verify it now builds successfully from the command line, e.g. xcodebuild -sdk iphonesimulator6.1 -workspace PRTablet.xcworkspace -scheme PRTablet -configuration Release.
    • Now that you know it will build successfully, create a second task under your build plan. Select Add Task -> Xcode. When you're done, it should look something like below.
    • Save and run your build plan. If you got a "possible compilation error" check the logs. If it's an xcodebuild error: "The workspace does not contain a scheme named," you need to go back into your workspace, go to Manage Schemes... and check the Shared box next to your scheme. Push your code and run your build plan again (Bamboo may even do it automatically this time). You should now be green.
Screen Shot 2013-07-26 at 6.27.42 PM.png
  • Task 2.5 - Unit Tests
    • To have your agent execute your unit tests, you'll need to check the "Include OCUnit Test results" box above. See the Xcode documentation. Note: I've been unable to get this working so far, since Bamboo for some reason is unable to see our unit tests. The error message reads "Failing task since test cases were expected but none were found." I found a support Q&A for this problem and added a comment asking if anyone has found the fix.
    • UPDATE: Looks like unit tests with a test host cannot be run in the simulator in v1.8 of the app. If you can, upgrade to v2.1. In versions 2.0+ of the plugin, you can use the ios-sim method to run application tests from the command line.

Working (or not) with Git submodules

A BYU CocoaHeads club member recently posted to our Google group that she’s been having problems with Git and submodules. There were some great responses, suggestions, and tips; this is my take on Git submodules.

I’ve had some frustrations with submodules, Git, and team projects. Now, when installing a third-party framework that supports submodule installation, I generally skip that part, download the most recent stable source code, put it in my Git repo, and install it without using any submodules. This also gives me the ability to update to new versions more selectively, which I prefer… one reason being that by default some of those submodules may be pointing at active development branches rather than stable, production-quality, tested code. If you simply copy and paste their installation commands, later on you might get some buggy code when you update. Of course, if you’re careful and read through the changesets before you update, double-check that the submodule branch is what you want, etc., submodules can be really helpful. I just got burned a couple of times when I was starting out with Git, and since then my personal Git workflow has progressed without using them for the most part.

Testing in iOS

I’ve been researching Continuous Integration (CI) and unit testing in iOS lately and learning a lot about testing your application. So far, I’ve learned how to write unit tests in Xcode, some basic guidelines for writing those tests, and experimented with Jenkins CI server to create automated builds, tests, email notifications, and archives. This post is meant to be a collection of my thoughts and takeaways.

Two Types of Unit Tests

There are two main types of unit tests in Objective-C: logic tests and application tests. There are also two general approaches to unit testing, which happen to correlate: bottom-up and top-down. In bottom-up testing, you test each method or class individually, entirely independently of everything else. In top-down testing, you test the functionality of the app as a whole.

Because your logic tests are independent of each other, they can run without needing the context of your application’s controllers or views. Logic tests can only be run on the simulator. Application tests are (appropriately) run in the context of a host app, and can be run either on a device or in the simulator.

Efficiency in Unit Testing

After figuring out how to create unit tests, run them, and see the little green checkboxes telling me they passed, my first reaction was to get a little unit-test-happy. I was thinking oh, I can test this, and that, and all of that… Well, I’m finding that there’s a balance between efficient, high-quality unit tests and simply testing every single input and output.

When you test parsing a JSON response from your server, for example, one way to do it is to assert that the final property value is equal to the original value in the JSON. There are, however, many more ways to test your parsing and relational mapping. You might try testing for valid data using character sets, checking that string lengths are greater than zero, or that birthdates are before today’s date. Then, try 4 or 5 different data sets, rather than a single one.

Basically, rather than test for a specific outcome with your logic tests, be a little more broad. Trying to test for a very specific output might be beneficial in some cases, but it can quickly become tedious and time-consuming. Testing for types of data, unsupported input characters, and invalid states can cover more errant cases in less time. Significantly less time.
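To make this concrete, here's a sketch of that broader style of test in Python (the pattern is language-agnostic; the field names and sample data below are invented for illustration):

```python
import json
import unittest
from datetime import date, datetime

# Hypothetical server responses; the fields and values are made up.
SAMPLE_RESPONSES = [
    '{"name": "Alice", "email": "alice@example.com", "birthdate": "1985-04-12"}',
    '{"name": "Bob", "email": "bob@example.com", "birthdate": "1990-11-30"}',
]

class UserParsingTests(unittest.TestCase):
    def test_parsed_fields_are_broadly_valid(self):
        # Check types, lengths, and valid states across several data sets,
        # rather than asserting one exact expected value.
        for raw in SAMPLE_RESPONSES:
            user = json.loads(raw)
            self.assertIsInstance(user["name"], str)
            self.assertGreater(len(user["name"]), 0)
            self.assertIn("@", user["email"])
            born = datetime.strptime(user["birthdate"], "%Y-%m-%d").date()
            self.assertLess(born, date.today())

suite = unittest.defaultTestLoader.loadTestsFromTestCase(UserParsingTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each assertion checks a class of validity (a type, a non-zero length, a character that must be present, a date that must be in the past) rather than pinning one exact value, so the same test runs unchanged across multiple data sets.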

The Sweet Spot 

 

image-thumb1.png

I found this graphic in a blog post about unit testing best practices about halfway through my research and was really glad I did. Essentially, unit tests should hit the sweet spot of testing individual units (bottom-up) or testing the entire system (top-down) and not fall into a dirty hybrid that only costs additional time and effort without proving much.

 

RestKit - Load data from local json file

I've found that it can be very helpful to be able to load data locally rather than from a server, especially for testing and for situations where you don't have control over the availability or stability of the server side. This code sample shows how, using RestKit, you can load JSON directly from a file. If you know what the expected server response is, but don't have access to the server, this allows you to put all the object mappings in place and load your objects without requiring a live server. https://gist.github.com/kyleclegg/5846568
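RestKit itself is Objective-C (the gist above shows the real thing), but the underlying idea is simple enough to sketch generically. Here's a Python illustration with made-up field names, where the mapping code doesn't care whether the JSON came from a server or a local fixture:

```python
import json
from dataclasses import dataclass

# A canned copy of the expected server response, saved as a local fixture.
# With RestKit you'd point the object loader at a file URL instead of the
# server endpoint; the mapping layer doesn't care where the bytes came from.
FIXTURE = '{"id": 42, "title": "Offline article", "author": "Jane Doe"}'

@dataclass
class Article:
    id: int
    title: str
    author: str

def load_article(raw_json: str) -> Article:
    # The same mapping you'd apply to a live response.
    data = json.loads(raw_json)
    return Article(id=data["id"], title=data["title"], author=data["author"])

article = load_article(FIXTURE)
print(article.title)
```

The point is that the mapping layer only ever sees a string of JSON; swapping a file for a live endpoint changes nothing downstream, which is exactly what makes the local-file approach useful for tests.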

Google Fiber coming to Provo

Our pre-registrations paid off!

Let's go utah... pre-register now. All of you. 1000Mb/sec #googlefiberfiber.google.com/about/

— Kyle Clegg (@kyle_clegg) July 26, 2012

 

gfiber2.png-fileId=19601385.png

The #EpicProvoAnnouncement hashtag on Twitter has truly turned out to be epic. Provo is getting Google Fiber and it makes sense for so many reasons:

Screen Shot 2013-04-17 at 2.02.43 PM

  • Entrepreneurship - Provo is one of the best places for tech startups outside of Silicon Valley -- on the same level as Austin.
  • Infrastructure - There's an existing infrastructure in place, put there by the city of Provo nearly 10 years ago. It failed (IMO) due to the greed of those who decided to limit the speeds to near cable levels and charge only marginally less than other providers. Stifling innovation in the pursuit of some extra $$s.
  • Data - Genealogy research is huge in Utah, with several large organizations likely to get on board ASAP.
  • Students - Provo is home to a major university in BYU, with another 25,000+ student university, UVU, ten minutes away.
  • Innovation - Utah IS Silicon Slopes!

Company Culture

This resonates extremely well with me: "There’s still a traditional view out there that agile methods, hacking, open-source and new technologies don’t have a place in serious business. Our view is that all of those wonderful things power people, businesses, and society forward. We’re obsessed with how things work, are inspired by change, and simply love to build stuff. So we hire people and take on projects that let us do just that."

from http://www.controlgroup.com/careers.html

Ditching mySQL

As a Java/OO developer first (web later), I got my start with databases by setting up a couple of WordPress blogs; mostly simple UI stuff, but configuration and a few other cases got me into phpMyAdmin and MySQL. I wouldn't be surprised if this was the case for hundreds, or thousands, of others. I don't -- or didn't -- mind MySQL so much because honestly it got the job done for those simple blogs and it was easy to get going. However, I will say that now that I am surrounded by "production-level" projects, i.e. projects at work that affect millions of users and backends for my own mobile apps, I am extremely concerned about the performance, consistency, (over)complexity, and maintenance of my databases. I've gotten familiar with Postgres, and while I don't fully understand all its benefits over MySQL, it works great, feels sexy, and posts like this have pushed me to make the move.

Also, using frameworks like Ruby on Rails, I feel abstracted far enough from the database level that the change really wasn't too difficult. It makes me wish I hadn't used MySQL in the first place, and had started with SQLite (because of its support on mobile devices) or Postgres.

Mobile App Competition Results

timeline.png

This post is long in coming... it actually should have been written at the end of November. Here's a brief recap of Growing Pains and how it took home some awesome awards in the BYU Mobile App Competition. We had big plans for Growing Pains, but at the time of the submission deadline we were probably only 40-50% done with our first iteration feature set. I honestly was not expecting to take home much from the competition. The one award I was fairly confident about was best Ruby on Rails backend, mostly because I was guessing that we were one of the only RoR backends.

My sister Kandace and I were there and were pumped when they announced Growing Pains as a top-16 semifinalist out of 25, with a guaranteed $250 cash prize.  Dru couldn't make it because it was during the day and she would have to miss work.  All 16 semi-finalists gave a 2 minute demo and presentation on their app, which was exciting for me.  I've never made a pitch to 500 people before.

Then the awards... Kandace and I were super stoked when the first award they gave out -- the BizVector award for business potential from MokiNetworks -- was given to Growing Pains! $100 gift card. Sweet! Next up, the finalists. Again, the very first app they announced (which just added to the excitement and surprise) was Growing Pains, 5th place with a $1000 cash prize. Other top apps included a couple of games and 2 business productivity apps, with a top cash prize of $3000. We also won the Ruby on Rails API award, which was an iPad for each team member. In total we came away with $2100 in awards and prize money, plus a heavy dose of validation and encouragement about our idea and the direction Growing Pains was heading.

finalists.png

We've continued working on Growing Pains and recently started beta testing with a couple of family members. If you're interested in giving us some pre-release feedback, let me know!

iOS Final Exam

In 3 hours:
  1. Download and parse JSON for the version number and location of a zipped SQLite db file
  2. If it's the first run or a newer version than previously downloaded, download the zip using a progress indicator
  3. Save the zip to the device's Documents folder
  4. Decompress the zip file and save its contents
  5. Delete the saved zip file
  6. Open a connection to the db, query the last updated time, and display it

Recommended frameworks: AFNetworking, ZipArchive, SQLite

And the result... http://screencast.com/t/UXm4kHuMnuq. I didn't have time to make it look pretty, but I finished all 6 tasks. It took every minute of those 3 hours, but I'm pretty proud of it. :)
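For anyone curious about the shape of the solution, here's a hedged sketch of those six steps in Python rather than Objective-C (the manifest fields, file names, and version numbers are all invented, and the "server side" is faked with local files so the flow runs end to end):

```python
import json, os, sqlite3, tempfile, zipfile

workdir = tempfile.mkdtemp()

# --- Setup: fake the "server side" (a manifest plus a zipped SQLite db). ---
db_path = os.path.join(workdir, "content.sqlite")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE meta (last_updated TEXT)")
conn.execute("INSERT INTO meta VALUES ('2013-04-01 12:00:00')")
conn.commit()
conn.close()

zip_path = os.path.join(workdir, "content.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.write(db_path, arcname="content.sqlite")
os.remove(db_path)

manifest = json.dumps({"version": 2, "db_zip": zip_path})

# --- The exam flow itself. ---
docs = os.path.join(workdir, "Documents")
os.makedirs(docs, exist_ok=True)
info = json.loads(manifest)                       # 1) parse the JSON
previous_version = 1                              # 2) stored from a prior run
if info["version"] > previous_version:
    local_zip = os.path.join(docs, "db.zip")      # 3) save zip to Documents
    with open(info["db_zip"], "rb") as src, open(local_zip, "wb") as dst:
        dst.write(src.read())
    with zipfile.ZipFile(local_zip) as zf:        # 4) decompress and save
        zf.extractall(docs)
    os.remove(local_zip)                          # 5) delete the saved zip
conn = sqlite3.connect(os.path.join(docs, "content.sqlite"))
last_updated = conn.execute("SELECT last_updated FROM meta").fetchone()[0]
conn.close()
print(last_updated)                               # 6) query and display
```

On iOS the download in steps 2-3 would go through AFNetworking with a progress block and the unzip through ZipArchive; the control flow is otherwise the same.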

Duplicate an SQL Record

When working with SQL databases there are times you want to clone database rows and for whatever reason don't want to write out a ton of INSERT statements.  This would be easily handled by

insert into users select * from users where username="webuser1";

except that this will not handle unique key constraints, i.e. when your ssn or user_id fields must remain unique.  One convenient way to get around the restrictions on unique keys is to create a temporary table, clone the record, change the necessary fields, then copy it back to the original table.
CREATE TEMPORARY TABLE users2 ENGINE=MEMORY SELECT * FROM users 
WHERE username="webuser1";
UPDATE users2 SET username="webuser2"; ## Change the username to be unique
## Update any other fields that must be unique
INSERT INTO users SELECT * FROM users2;
DROP TABLE users2;
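The same trick can be sketched in SQLite, for anyone who wants to experiment without a MySQL server (the schema here is invented; SQLite temporary tables don't take ENGINE=MEMORY, and setting user_id to NULL lets the INTEGER PRIMARY KEY auto-assign a fresh value on re-insert):

```python
import sqlite3

# SQLite version of the temp-table clone pattern.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (user_id INTEGER PRIMARY KEY, username TEXT UNIQUE, role TEXT);
INSERT INTO users VALUES (1, 'webuser1', 'editor');

-- Clone the record into a temp table, fix the unique fields, copy it back.
CREATE TEMPORARY TABLE users2 AS SELECT * FROM users WHERE username = 'webuser1';
UPDATE users2 SET user_id = NULL, username = 'webuser2';
INSERT INTO users SELECT * FROM users2;
DROP TABLE users2;
""")
for row in conn.execute("SELECT user_id, username, role FROM users ORDER BY user_id"):
    print(row)
```

The temp table has no uniqueness constraints of its own, which is what lets you hold the duplicate row long enough to edit it.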

Tablet or Laptop

A past professor recently asked island (the information systems forum/community I participate in) whether he should get a tablet or a laptop for his teenage daughter.  She's been asking for a tablet, and he wants a solution that will work for her for both play and for school and homework.  It's an interesting question and very interesting topic since laptops and tablets are accepted for use in many high schools across the US.  When I was in high school (2001-2005) using a laptop at school wasn't even a consideration.  I wonder what kids will have in 30 years...

Here are my recommendations at around the $300 price point:
  1. Nexus 10
  2. iPad 2
  3. Windows Surface (with condition)

The Nexus 10 is going to be pretty ideal for this situation.  I think a high schooler can very reasonably do their studying and homework on a tablet, with the exception of research papers and essays.  For those he or she may need to use the family computer (or at least a keyboard dock), but I wouldn't consider that case enough reasoning to go with a laptop instead of a tablet, especially if they're already wanting a tablet.  I have a Nexus 7 and I really like it for reading and for taking notes.  I love Evernote because it's on every platform I could dream of using (web, iOS, Android, Amazon, OS X, Windows, and even Windows Phone, but not Linux AFAIK).

As another option, I think you can get a 16GB iPad 2 in that price range, which would be a great choice as well.  It also wouldn't be brand new, which may or may not be important to a parent.  I know some would rather their teenagers have an older generation version of a product than something fresh off the shelves.

About the Microsoft Surface... if it were 1 year from now, I would recommend getting a used Surface.  It would definitely have all the Windows-based functionality that someone could need.  However, because it was just released Friday and its price point is $499, I don't think it's a good option today.  But it's worth mentioning.

The new Copyright Alert System

Relating to my Information Security class (and just listening to local news driving home from school), I've recently heard quite a bit about the new Copyright Alert System.  I decided to do a little reading and learn more about it.  A lot of my comments come from reading this Hacker News article: http://thehackernews.com/2012/10/isps-will-warn-you-about-pirate-content.html#sthash.hqrC94wn.dpbs. The Copyright Alert System (CAS) will begin showing up in the U.S. in late 2012, according to the U.S. Center for Copyright Information. The new Copyright Alert System has partnered with Internet Service Providers (ISPs) such as AT&T, Cablevision, Comcast, Time Warner Cable, and Verizon to deter subscribers from infringement over peer-to-peer networks. Implementation may vary, but the providers’ respective flavors of the system are expected to roll out within the next two months.

The new system works by monitoring illegal transferring and downloading of copyrighted files using MarkMonitor, a brand protection company, and issues warnings for infractions. Gradually more severe responses are given to each subsequent infringement, beginning with emailed warnings, escalating to throttled data speeds, and, for more serious offenders, suspension of service and possible legal action, including severe fines. In addition to protecting original content creators and owners, the CAS also benefits the ISPs. If accused of illegal activity, offenders can request a review of their network activity by paying a $35 fee. If the offender is found not guilty, their money will be refunded. If they are found guilty, the fee will be kept.

The Center for Copyright Information applauds the new system, saying that it is “designed to make consumers aware of activity that has occurred using their Internet accounts, educate them on how they can prevent such activity from happening again, and provide information about the growing number of ways to access digital content legally.”

“Contrary to many erroneous reports, this is not a ‘six-strikes-and-you’re-out’ system that would result in termination,” the group said in a press release. “There's no ‘strikeout’ in this program.” However, apparently there is some controversy here, because there are rumors of a six-strike limit, yet no given policy on what happens if people continue to download or share pirated files, even after six warnings.

Assets of the Copyright Alert System
  1. MarkMonitor – A system that monitors network activity involving copyrighted media and can detect the illegal sharing and downloading of copyrighted files. Goal: prevent end users from abusing the ease of online information exchange by monitoring for illegal activity.
  2. ISPs – Previously, identifying illegal downloaders was up to the content owner. ISPs will now play a large role in enforcement. Goal: since ISPs have access to all network activity, they can more accurately detect infringers and better penalize users for their negligence or purposeful illegal activity.

Threats to Online Media
  1. End users downloading illegal media, such as music. Although this attacker is not the average black hat haxor, this person is still an “attacker” in the sense that they are performing illegal activity.
  2. Services that promote the sharing of illegal media and gain revenue through advertisements on their websites, e.g. Megaupload.

Weaknesses of the Copyright Alert System
  1. Users can still transfer copyrighted material via USB or FireWire, or some other connection not monitored by the ISP.
  2. Software that cracks copy protection, which would prevent MarkMonitor from detecting the illegal sharing and downloading.

We should all be aware of the issue of online piracy and how to share media within the confines of the law. Piracy laws are in place to protect businesses and individuals, and as an IT generation and as information consumers, we should be aware of the latest technologies in information security, from protecting enterprises with hardware or software to protecting content creators with the Copyright Alert System.

Web tracking firm, Compete, settles charges for illegally collecting sensitive user data

I recently read an article published by Ars Technica, one of my favorite websites for tech news, education, and product reviews, as part of a school assignment and wanted to post my thoughts. I’ve found that Ars Technica is one of the more intelligent and educational technology blogs out there, which is very refreshing in a web full of tech blogs that want your clicks and try to attract you with gimmicky headlines and juicy gossip. The article is titled "Web tracking firm settles charges it collected passwords, financial data" and recounts the recent happenings surrounding Compete Inc. and their abuse of data tracking, the lawsuit, and the subsequent settlement. The article was published on 10/22/12 and can be found at http://arstechnica.com/tech-policy/2012/10/web-tracking-firm-settles-charges-it-collected-passwords-financial-data/.

The Massachusetts-based company has agreed to obtain end users’ consent before collecting future data on their browsing history, and has also agreed to anonymize customer data. The Federal Trade Commission (FTC) filed charges against Compete relating to a toolbar that gave consumers “instant access” to information about the websites they visited, as well as a second software package called the Consumer Input Panel that gave consumers the opportunity to win rewards for expressing opinions about products and services. Both software packages did more than they advertised, said the FTC. "In fact, Compete collected more than browsing behavior or addresses of webpages," FTC lawyers wrote in a civil complaint filed in the case. "It collected extensive information about consumers' online activities and transmitted the information in clear readable text to Compete's servers. The data collected included information about all websites visited, all links followed, and the advertisements displayed while the consumer was on a given webpage."

Compete began collecting credit card numbers, social security numbers, and other sensitive data as early as January 2006, and has now agreed to settle the charges made against them. The article does not list an amount that Compete must pay, any other details on what their penalty will be, or what will be done with the data sitting in Compete’s databases, but it does say that Compete will settle.

This article describes a situation that unfortunately is somewhat common. In some cases the company is identified and brought to court; in other cases it likely goes undetected. All internet users should be aware of the risks of using technology, and specifically of using third-party tools in addition to their web browser. We should be wary of unneeded plug-ins, toolbars, widgets, and other applications that serve a minute purpose and have little industry credibility. In my opinion, trusting companies like Google and Facebook is a much safer approach to protecting your data and your identity online, because these industry leaders are extremely transparent in how they manage end-users’ data and are increasingly under the microscope in terms of what they do with that data. This scrutiny helps create strict policies and regulations that help protect our data.