OmniGraffle 7.11 for Mac is a very important update — it includes significant performance improvements. There’s no silver bullet for making an app faster: there is only figuring out which things are slow, figuring out how to speed those things up, figuring out how to measure progress — and then making sure no new bugs were added. It’s a lot of work!
In this special episode we have two separate interviews. The first is with OmniGraffle engineers Rey Worthington and Shannon Hughes. The second is with Ken Case, CEO; Tim Wood, CTO; and Dan Walker, OmniGraffle PM. These interviews were recorded while the work was in progress but pretty far along.
We talk about how we found the slowdowns, how we fixed them, and how we wrote tests to make sure we know we’re making OmniGraffle faster.
Brent Simmons: You're listening to The Omni Show. Get to know the people and stories behind The Omni Group's award-winning productivity apps for Mac and iOS.
SFX: [MUSIC PLAYS]
Brent Simmons: I'm your host, Brent Simmons. This is a special two-part episode on OmniGraffle performance enhancements, which just shipped on Mac and iOS. Later on we'll be talking with Ken Case, Tim Wood, and Dan Walker. But let's begin with two OmniGraffle engineers, Rey Worthington and Shannon Hughes. Say hello, Shannon.
Shannon Hughes: Hello Shannon.
Brent Simmons: Say hello, Rey.
Rey Worthington: Hello Rey.
Brent Simmons: The two of you and these other folks have been working on OmniGraffle performance. We'll start with Shannon. What have you been working on?
Shannon Hughes: Okay, so one of the things that we discovered early on was that just opening a really large document that one of our customers sent in was taking a long time.
Brent Simmons: Large in terms of lots of elements to it?
Shannon Hughes: Yes, I think there were a whole lot of canvases and also a lot of graphics on each canvas. And we discovered that a lot of the time that we were taking was actually in building arrays that we then were throwing away. Took a little bit of time to figure out where that was coming from, but it was when we were making the sidebar list of all the graphics. We were asking the layer for its array of graphics so that we could list them in the sidebar, but it turns out that when we ask a layer for its graphics, we make a temporary copy of the graphics and hand it back.
Brent Simmons: Oh, okay.
Shannon Hughes: Right. So that was work that we did not in this case need to be doing. There's a few places in the app where we do need to be making that copy, but for the most part, in most places, we don't. So we solved that by not doing that work.
Brent Simmons: Right. That's the answer for all performance — Don't do the work.
Shannon Hughes: Yes, exactly.
Brent Simmons: How did you not do that work?
Shannon Hughes: How did we not do that work? Well, first of all, because we still did need that API in some places, we couldn't do away with it entirely. But we also didn't want to run into this problem again. Hopefully. So we renamed that method: instead of just calling it layerGraphics, it's now called tempArrayOfLayerGraphics, so that if you decide to call it, you realize that you might be doing something expensive. Right?
Brent Simmons: Sure.
Shannon Hughes: And then we made some new API, enumerateLayerGraphicsUsingBlock, so that we encourage you to just give us a thing to do to all these graphics, rather than asking for the whole array.
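The shape of that change — exposing an enumerator instead of handing back a copied array — can be sketched in Python. The names mirror the Objective-C ones above, but the code is an illustrative sketch, not Omni's actual implementation:

```python
class Layer:
    """A layer that owns its graphics and avoids handing the list out."""

    def __init__(self, graphics):
        self._graphics = list(graphics)          # private storage

    def temp_array_of_layer_graphics(self):
        # The scary-named variant: callers pay for a defensive copy.
        return list(self._graphics)

    def enumerate_layer_graphics(self, block):
        # The cheap variant: run the caller's block over the storage in
        # place, with no temporary copy of the array.
        for graphic in self._graphics:
            block(graphic)

layer = Layer(["circle", "square", "line"])
seen = []
layer.enumerate_layer_graphics(seen.append)      # no throwaway array built
```

The sidebar-building code then passes in the per-graphic work as the block, and the layer's storage never has to be duplicated just to be read.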
Brent Simmons: It sounds like a very modern solution, using a block and enumeration.
Shannon Hughes: We are very modern here at Omni.
Brent Simmons: Yeah, that's cool. We're writing in Swift these days, I imagine. Sometimes? A lot of times?
Shannon Hughes: Sometimes yes, sometimes.
Rey Worthington: We're, I don't know, 30/70 Swift/Objective-C these days.
Shannon Hughes: Maybe. Yeah. I don't know.
Brent Simmons: You mean the entire code base or...
Rey Worthington: In the Graffle…
Brent Simmons: Or if … code written that day?
Rey Worthington: Graffle.
Brent Simmons: In the codebase of Graffle.
Rey Worthington: In the codebase. Yeah.
Shannon Hughes: And that's one problem we ran into actually...
Rey Worthington: Yeah.
Shannon Hughes: with arrays.
Rey Worthington: Yeah. Because it turns out that it is not free to switch between an Objective-C array and a Swift array. Moving arrays of information between the two languages had a significant cost and was slowing us down in some places. That was part of what prompted the layer graphics performance solution there.
Brent Simmons: So the Swift to Objective-C bridging was part of the issue.
Rey Worthington: Yeah. So as a result, now we can say "here's the work to do," and the layer can use the graphics that it already has, rather than making copies and sending them off to be manipulated.
Shannon Hughes: Yeah. And because we were just saying, enumerate the thing, like enumerate these graphics and do the thing with them, right. Then that array can stay in whatever language it is in. The block is what gets transferred over. I forgot about that.
Rey Worthington: That's what I was trying to say. Thank you.
Brent Simmons: All right. So yeah, you skipped bridging. You skipped the copy.
Shannon Hughes: Awesome.
Brent Simmons: That's the way to go.
Shannon Hughes: So much better.
Brent Simmons: Nice. I liked that you renamed it, too. You could've just called it...
Shannon Hughes: I really like to give things scary names when you shouldn't be using them very often.
Rey Worthington: It's been really helpful. Like it makes it a lot clearer when you're reading things like, "okay, I have to stop and think about this because this is making copies and it could be making copies of a lot of things." Especially — one of the coolest things with all of this performance work that we've been doing has been collecting and looking at user documents that people have sent in with notes, "This is slow when I do this." I just don't even imagine half the things that some of our users are doing with our application. It's awe inspiring.
Brent Simmons: There was a guy working on a book of art, and I probably can't say more than this about it, but I've seen it. I think we've probably all seen it, and it's massive, and massively impressive.
Shannon Hughes: It's amazing. Yeah.
Brent Simmons: Have you guys been using that as part of the performance... ?
Shannon Hughes: Absolutely.
Rey Worthington: It's in four separate performance tests at this point. And you only have to say the title of this document and the whole team is like, oh, that guy. But it was really cool because—
Brent Simmons: Because we love what he's been able to do.
Rey Worthington: Right!
Shannon Hughes: And making the performance improvements and seeing them in that document has been the most gratifying work of this whole performance milestone.
Rey Worthington: And it's cool both because he's got this document that's sizeable, but it's actually exercising a couple of different pain points that we had been trying to alleviate. So it was just like this perfect storm of challenges, but places where we could improve. And we released our public Beta last week.
Brent Simmons: Which, when this comes out, is probably four weeks ago.
Rey Worthington: Wow. Fair. But we got feedback from the author of that document pretty quickly that—
Brent Simmons: They were happy.
Rey Worthington: Oh yeah. They had just said, "This is so much better. Thank you so much."
Brent Simmons: That's great.
Rey Worthington: And like that was, that was extremely exciting.
Shannon Hughes: Yeah. There was so much celebration in the chat. We were all very happy.
Rey Worthington: It's weird because we're—
Brent Simmons: You have literally seen it get better with his actual document.
Rey Worthington: Right. It's like when you buy a present for somebody and you're just waiting for them to open it. Right? I'm just waiting for him to run it and I hope he tells me what he does. Right? That's always the challenge. Some people are really into working with you to track down a bug and some people are like, "hey, I told you it was wrong and now this is no longer my problem." And that's fair. But at the same time it's really nice to get that feedback that's like, "yeah, you what you did actually helped me."
Brent Simmons: That's cool.
Shannon Hughes: It was also really nice because, as we've been working on this performance milestone, we've also been building tests or ... can we go there now?
Brent Simmons: I was... Let's put a pin in that, I hadn't asked what you specifically had been working on.
Rey Worthington: So a lot of what I've been doing has been in support of Shannon and Ryan's work, more than huge gains or things that I've been doing off in my own corner. At the start of the target, I spent a lot of time trying to find where our pain points were: a lot of time opening up these user documents in Instruments, trying to find the heavy backtraces, and filing bugs on that. And then as the target has moved on and we've started making some of these improvements, it's been making sure that our tests exist and actually test what they say they're going to test.
Rey Worthington: So, talking about how to document what we're doing properly, and making sure that the tools that we're using are useful. A lot of that has been feedback on the work that Tim Wood has been doing, building up a test rig that always runs on the same hardware and has a repeatable test environment, so that gains are visible and meaningful. That's always a challenge with performance testing, because if you run it on your local machine and you have an especially heavy webpage in another window, suddenly everything is different. Why is this so much slower? Oh, well, because I had YouTube in another window, and I didn't even think about it.
Brent Simmons: We're not just doing some XCTest performance measurements. There's a whole new test apparatus going on here?
Shannon Hughes: We are writing the tests in with the XCTest framework and Xcode and we can run them there.
Rey Worthington: We've got a superstructure around the XCTests, and the reporting of the performance from them is in our own structure. It's built into OmniAutoBuild, which is our internal app that shows us the status of all of our build servers, whether they're green or not, for all of our apps, and also lets us request new builds. And now it's also reporting test performance, which makes sense, because there are targets for tests and there have been for a long time. So this is just an extra view on some of that build information.
Shannon Hughes: And now it will tell us that we are failing if we have gone below our performance benchmark on any of our tests.
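The failing-the-build-on-regression idea can be sketched in a few lines. This is an illustrative Python harness, not Omni's actual infrastructure; the test name, slack factor, and baseline store are assumptions:

```python
import time

def measure(block, iterations=10):
    """Run `block` repeatedly and return (min, average) wall-clock seconds."""
    times = []
    for _ in range(iterations):
        start = time.perf_counter()
        block()
        times.append(time.perf_counter() - start)
    return min(times), sum(times) / len(times)

def passes_baseline(name, average_seconds, baselines, slack=1.1):
    """Report failure when a test drifts past its recorded baseline.
    `baselines` maps test names to the best average recorded so far."""
    return average_seconds <= baselines[name] * slack
```

A build system wired up this way records each run's numbers for historical trends, and marks the build red whenever `passes_baseline` returns false for any test.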
Brent Simmons: And so you mentioned a critical part of this is consistent hardware, and so it's only running on a subset of our build servers?
Rey Worthington: It is running on one server.
Brent Simmons: One server, okay.
Rey Worthington: We have one server that is the blessed one.
Brent Simmons: Because it would be consistent then.
Rey Worthington: And since it's just a build server, and it's only building one thing at once, we know that we have arguably constant hardware and activity status.
Brent Simmons: Right. No one's using it to watch YouTube.
Rey Worthington: Nobody's playing Minecraft in another window, or at least not that I know of.
Brent Simmons: Rey, you had told a story about an undo bug you'd been working on, and the tests. What was that about?
Rey Worthington: Yeah. One of the latest things that I had been in pursuit of — and this was my, "Okay, Shannon and Ryan have both made these great gains in performance. This is the one I'm going to figure out." — involved moving a group of lines. And when I say a group of lines—
Shannon Hughes: How many lines? How many lines was it?
Rey Worthington: Yes! This is a user document, and it was a diagram of a catwalk. Think like a crossed metal girder kind of situation. There were 6,900 lines in this group.
Brent Simmons: That's a lot of lines. It ought to be able to handle it, though.
Rey Worthington: It looks really cool. And it was all grouped together so you can move it as one unit and use it to diagram whatever stage setup they had been working on. I don't know what the larger goal was, but this was the part of the document that was causing trouble. And the problem was, when you moved the group and then undid the move, that was really slow.
Rey Worthington: And so I, being the responsible performance tester that I have become — yes, become, that's the key word there — I wrote a test that moved the group and then undid it, and I saw the unfortunately subpar performance that I had expected, and then I made a fix, and then the test ran again and nothing changed, and I was so disappointed.
Brent Simmons: Now had you verified manually that it was faster?
Rey Worthington: I had! Right? So I had run the application in Instruments — I had theorized this is going to improve my performance. I had run in Instruments, I had demonstrated to myself that this did improve performance when I was doing the user steps of doing the drag and hitting ⌘-Z undo. But the test wasn't reproducing this.
Brent Simmons: Grrr.
Rey Worthington: Exactly. So I was stuck in this situation where I had a couple of different answers. One, the test isn't testing what I think it is. Two, the fix isn't fixing what I think it is. Three, some combination, or something else horrible is happening. So I ended up really closely introspecting the drag code to see whether maybe I wasn't calling the same API to move the graphic group that the actual drag code does, and it turned out I wasn't. So I wrote another test, and it didn't change either, and then I was really upset. Long story short, it turned out that the undo mechanism I had improved the performance of was at a high enough level that I hadn't caught it in my test; that part wasn't being invoked.
Brent Simmons: Okay, so you're testing something lower down.
Rey Worthington: Yeah, exactly. And it turns out that it was a two line fix to my test. Right? And it's running as we speak on the one blessed server somewhere in a server room. So hopefully when we're done here I'll be able to look at the results and finally see that line drop and be able to do my little victory dance. Hopefully.
Brent Simmons: That's why we should be a video podcast cause we'd follow you and—
Rey Worthington: No. No. Nobody needs to see that, Brent.
Brent Simmons: What about our audience?
Rey Worthington: That's part of nobody.
Brent Simmons: When we started to work on OmniGraffle performance, did we set up all this testing stuff in advance, or just kind of dive right in?
Shannon Hughes: So we knew that we needed tests. Right? And we had a meeting. We all agreed we will not work on fixing a bug until we have written a test to catch this performance problem. Right? We were all in agreement about that.
Rey Worthington: I think so. It didn't exactly work that way.
Shannon Hughes: Yeah. I give us what, maybe a B plus on that, for this milestone. We've done pretty well. Pretty well. Sometimes it's just too tempting and you just forget.
Rey Worthington: Oh yeah.
Shannon Hughes: Yeah. And then the whole system that Tim's done on the build servers has been sort of developed in parallel.
Rey Worthington: Right. There's been a lot of feedback about what we seem to be capturing. The base Xcode performance tests weren't accurately capturing some of the work we were doing, because of performing after delays and sending things off to happen on other threads. So work was happening that wasn't getting measured. A lot of our exoskeleton around the tests is making sure that all of that gets counted in our measurements, so that we actually see the cost of what we're doing and properly account for it. That was, I think, a large part of Tim's goals. At the same time we were saying, "Hey, we need to test everything," Tim was saying, "Well, our tests aren't going to reflect reality properly yet." So while we got started with figuring out what our problems were and theorizing about what some of the bigger fixes might be, he was busy building up some of that testing infrastructure so that we could later improve it. Which in turn means that some of our big gains, some of our early fixes, didn't get captured in our data, because the data wasn't there yet.
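Rey's point — that work dispatched to other threads or performed after a delay escapes a naive timer — can be sketched like this. This is illustrative Python, not Omni's harness; the idea is simply to keep the clock running until deferred work has landed:

```python
import threading
import time

def measure_including_background_work(block, iterations=3):
    """Time `block`, but wait for any work it handed off to background
    threads before stopping the clock, so deferred costs get counted."""
    times = []
    for _ in range(iterations):
        pending = []                     # the block registers spawned threads here
        start = time.perf_counter()
        block(pending)
        for thread in pending:           # keep timing until deferred work lands
            thread.join()
        times.append(time.perf_counter() - start)
    return times

# A toy workload that defers part of its work to another thread.
results = []
def workload(pending):
    thread = threading.Thread(target=lambda: results.append("done"))
    thread.start()
    pending.append(thread)

times = measure_including_background_work(workload)
```

Without the `join` step, the timer would stop while the background thread was still running, and the measurement would undercount the real cost.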
Shannon Hughes: They do have like commit messages where we ran them on our local machines without the fix and with the fix, and can say like this got 50% faster.
Brent Simmons: Still, though, it's like not getting credit for your steps.
Rey Worthington: Right?
Shannon Hughes: It feels like it, which is part of why it was so great to get feedback from the customer, to be reminded that, oh, right, since we started this whole process, we have made some really big gains. Even though our current measurement system has only captured the tail end, so it looks less impressive.
Rey Worthington: Yeah. I think a change that I made and a change that Ryan made, between the two of them, had the document opening in like a 20th of the time or something absurd like that.
Brent Simmons: Wow, that's great.
Rey Worthington: And neither of those got captured in our historical data. And because of the way that the test development went, it's not easily possible to go back and measure where we were at at the very beginning.
Brent Simmons: Oh, right, sure.
Rey Worthington: So we're kind of at this point where we're just shrugging and saying, "Yeah, we did good. It's fine."
Shannon Hughes: And going forward we will be more rigorous.
Brent Simmons: Yeah, sure.
Rey Worthington: Right. And I mean this—
Brent Simmons: Got this all set up. That's cool.
Rey Worthington: But it'll be, it'll be interesting to see how things go. I made a change this morning, actually, to the build rig, this is the first one I have made. So I was talking to Tim like, "Okay, I'm going to change the thing. Is it, are we okay with me changing the thing? Yeah. Okay." Where we weren't quite making sure that we were at a consistent place before we started measuring sometimes on some tests.
Brent Simmons: Oh and you need like a clean slate or whatever.
Rey Worthington: Exactly. So I had tests where, the very first time we ran the thing we were measuring, it took a lot of time, and then each of the subsequent runs took very little time. So my average was coming in just about right. But once I made sure that we were at our good state before we started, suddenly my maximum time dropped significantly, and my average dropped a little bit.
Rey Worthington: So today's test results are slightly better than yesterday's. I was pretty happy about it. It's always nice to make a change and say, I think this is going to have a positive effect, and then see it have a positive effect.
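The fix Rey describes, reaching a steady state before the first measured run, is a standard warm-up trick. A hedged sketch, with hypothetical parameter names:

```python
import time

def measure_with_warmup(block, iterations=3, warmup=1):
    """Run `block` a few untimed times first, so one-time setup costs
    (cache fills, lazy loading) don't inflate the maximum or the average."""
    for _ in range(warmup):
        block()                          # reach the steady state; not timed
    times = []
    for _ in range(iterations):
        start = time.perf_counter()
        block()
        times.append(time.perf_counter() - start)
    return max(times), sum(times) / len(times)

calls = []
def tracked():
    calls.append(1)

maximum, average = measure_with_warmup(tracked, iterations=3, warmup=2)
```

With `warmup=0`, an expensive first run dominates the maximum while the average looks roughly right; with even one warm-up pass, both numbers reflect steady-state cost, which is what Rey saw when his maximum dropped.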
Brent Simmons: Yeah. Right. And the days when it doesn't, or it's the opposite, [sigh] I just go to bed early.
Rey Worthington: We had one of those, actually. And it was really unfortunate because of the timing of what had happened. Ryan had made a change, and I had made a change that I thought could have caused problems. So I spent a lot of time trying to figure it out, and it turned out our tests got a lot worse, and I thought, "Oh no, I must've done this completely wrong." And it turned out that Ryan's change was not performant in a particular situation that he hadn't tested. And I had found it with the test that I had just added and thought, "Oh, I've written the test wrong." No, I caught a problem!
Brent Simmons: Yes. That's cool.
Rey Worthington: So once we figured it out, it was great, and we were able to back out that change and fix the problem that he was looking at in another way. So these tests have actually paid for themselves already.
Shannon Hughes: There was also the story of when we were partway through fixing a major thing: changing how we replace text variables. We weren't caching those results at all in the past, mostly because the cache invalidation problem is very tricky.
Brent Simmons: I've heard of that.
Shannon Hughes: Yeah. In this particular case, it's more tricky than usual. So now we were trying to cache them and the intermediate step meant that we weren't replacing them at all. We were never updating our cache.
Brent Simmons: So now it's fast!
Shannon Hughes: So it was real fast and that's when Tim set the baseline for the test.
Brent Simmons: So your baseline is, be as fast as if you don't do it at all.
Shannon Hughes: As if you were doing nothing. Yeah. Not "don't do it at all": do it perfectly, but as quickly as if you were doing nothing.
Rey Worthington: It turns out that's not possible.
Brent Simmons: You're getting close?
Shannon Hughes: Yeah. Well I, I, you know... It's better than it was.
Brent Simmons: I mean...
Rey Worthington: And this is one—
Brent Simmons: Bottom line is, it's CPU. It's where it has to happen.
Rey Worthington: This is one instance where we're going to need to adjust the baseline a little bit to account for that work that does have to happen actually happening, but there's a lot less of it happening than there used to be.
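A minimal sketch of the caching-plus-invalidation approach Shannon describes. The `<%Name%>` syntax, class name, and whole-cache invalidation strategy here are illustrative assumptions, not OmniGraffle's actual implementation:

```python
class VariableCache:
    """Cache expanded text variables; invalidate when a variable changes."""

    def __init__(self, variables):
        self.variables = dict(variables)
        self._expanded = {}              # template -> expanded string

    def expand(self, template):
        if template not in self._expanded:       # pay for substitution once
            text = template
            for name, value in self.variables.items():
                text = text.replace("<%" + name + "%>", value)
            self._expanded[template] = text
        return self._expanded[template]

    def set_variable(self, name, value):
        self.variables[name] = value
        self._expanded.clear()           # the tricky part: drop stale results
```

The intermediate bug from the story corresponds to forgetting the `clear()` call: expansion becomes as fast as doing nothing, because stale cached text is returned forever.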
Brent Simmons: Good. Good, good, good. Well, I think that pretty much covers things. Unless I'm forgetting something I should be asking you two about?
Rey Worthington: We're swift as the wind and want you to know it.
Brent Simmons: OmniGraffle: way faster. Everyone get it.
Shannon Hughes: And it's also going to continue being faster in the future, now that we have this great test setup in place.
Brent Simmons: And so for one thing, you'll catch it. If something slows down, you'll catch it right away.
Shannon Hughes: And we won't ship it.
Rey Worthington: Or at least we'll be able to sit and talk about, okay, are we okay with this going slightly slower?
Brent Simmons: Sure. There are always trade offs. If it like makes something else blazing fast.
Rey Worthington: Exactly. Yeah. Like is it the thing that very rarely gets done, is that okay that that's slow?
Shannon Hughes: Or if we were just doing it totally wrong or not doing it at all.
Rey Worthington: Exactly. Sometimes you gotta take time to figure things out. The other nice thing, though, is that since we've gone through the growing pains of getting all of this set up, that means the rest of our apps can grow into having performance tests. I know that Tom was starting to look at using the same rig for some [Omni]Plan performance tests. I don't know how far that's gone, but he asked me questions.
Brent Simmons: Every app can use...
Rey Worthington: Exactly.
Brent Simmons: Better tests and better performance. And that was part of our roadmap this year. We're going to do that. Here we are, doing it.
Rey Worthington: Woo! Good job, us.
Brent Simmons: Yeah. All right. Well thank you Shannon.
Shannon Hughes: Thank you.
Brent Simmons: Thank you, Rey.
Rey Worthington: You're very welcome.
Brent Simmons: And we'll move onto the next segment. In the studio we have Ken Case, Tim Wood and Dan Walker. Dan Walker is new to the show. He's the OmniGraffle prime minister. Say hello, Dan.
Dan Walker: Hello Dan.
Brent Simmons: Say Hello, Ken.
Ken Case: Hello Ken.
Brent Simmons: Say Hello, Tim.
Tim Wood: Hello Tim.
Brent Simmons: So I'll start with the obvious question. Semi-obvious, anyway: why are we working on performance enhancements? What was the impetus for this?
Ken Case: We don't like for things to be slow.
Brent Simmons: Super good answer. Yeah. But how did things get slow? What brought us to here?
Ken Case: So, we wrote OmniGraffle, what, about 18 or 19 years ago now. And at that time, screens were less dense. You didn't have retina screens, so there were fewer pixels you were pushing around that way. There were fewer pixels coming in on the input side as well, because you didn't have super high-res images coming from cameras, or even from screenshots, and all of that is more than it used to be. And the colors have gotten deeper, so even if you had the same number of pixels per inch, you would have deeper colors, and that's more bits per pixel, more pixels per inch. It all adds up. So that's one thing that's happened.
Ken Case: The drawing technology has changed a bit. And over time, the layers that we've been using to draw didn't necessarily match up with everything else. And we did some big optimizations when we did "iPad or Bust" and brought OmniGraffle to iPad in 2010, and we thought maybe this is a good time to bring some of those optimizations back to the Mac side of things.
Brent Simmons: Oh, okay. Yeah. I imagine that was probably a significant amount of work, back in 2010, bringing OmniGraffle to the iPad in the first place.
Ken Case: It was a much more limited device than it is now, in terms of relative horsepower.
Brent Simmons: Wasn't even a retina screen, right, at first? Yeah.
Tim Wood: Definitely not.
Ken Case: Tiny amount of memory, slow CPU, relatively speaking.
Brent Simmons: Well, what's the process? How did we go about identifying the things that needed to be faster? Knowing that machines have changed, the context has changed. There are still specific things that need to be sped up. Do we have a process for that?
Dan Walker: One of the things that we've noticed is that when people send us sample documents (and people who are able to share these sorts of things really help us out), we can see the performance problems that they may be trying to share with us, and it also gives us a little glimpse into what they're actually doing with the app. One of the things we noticed was that people are putting larger and larger images into their documents. People take large background images of maps, or of building floor plans, and things like that. They'll bring that into Graffle and then use Graffle to overlay icons or updates on top of those images. And we were starting to notice that our performance with those very large images was not meeting our standards, and people were bringing in larger and larger images as the world started supporting them.
Brent Simmons: So even an iPhone having a bigger and better camera can affect OmniGraffle performance because people are more likely to use those pictures. Were there areas besides large images that were particularly performance sensitive?
Dan Walker: The other area that we noticed was opening documents with a lot of canvases, or large groups, or large amounts of content on the screen. The number of objects on the canvas, those sorts of things, were starting to trail off in performance.
Brent Simmons: Tim, how did we make images faster?
Tim Wood: Well, our engineering team spent a bunch of work trying out various optimizations for pre-rendering images in a tiled fashion, so they're scaled to exactly what the device needs for quick rendering.
Brent Simmons: So was the tiling similar to— I think of the early days of iOS, the first version of Safari would draw like part of a webpage.
Tim Wood: Right.
Brent Simmons: Was there a checkerboard or something. I don't recall exactly, but it wouldn't draw all of it. Tile it that way.
Tim Wood: We don't do asynchronous tiling in that sense, so you don't ever see a checkerboard. But I believe (I'd have to double-check) we ended up doing the tiling across multiple cores. The big advantage is doing that up front, so that when you're scrolling around you're not paying a penalty for resizing the image while you're drawing it, that sort of thing.
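The pre-tiling Tim mentions starts with splitting an image into tile rectangles, each small enough to pre-render (potentially on separate cores) at exactly the scale the device needs. A sketch, where the 256-pixel tile size is an assumption for illustration:

```python
def tile_rects(image_width, image_height, tile_size=256):
    """Split an image into (x, y, width, height) tile rectangles.
    Edge tiles are clipped so the tiles exactly cover the image."""
    rects = []
    for y in range(0, image_height, tile_size):
        for x in range(0, image_width, tile_size):
            w = min(tile_size, image_width - x)
            h = min(tile_size, image_height - y)
            rects.append((x, y, w, h))
    return rects
```

Each rect can then be rendered once into a cached bitmap, so scrolling just blits ready-made tiles instead of rescaling the full source image per frame.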
Ken Case: We also even thought about how to make it more obvious to the user when they're doing something that would slow things down, like updating our image inspector to show you how big the image is, that is attached to a shape. So you realize that you've attached this 20 megabyte image and maybe it's going to slow things down and you can optimize it differently.
Brent Simmons: So I imagine beyond the big thing, there were probably also a lot of very, very small performance enhancements, but we've seen how that can really pile up and help an app.
Tim Wood: Yeah, there are performance enhancements scattered throughout the app. A lot of work was done to unify updates to the inspectors as changes are made. Just one example: we have a test where you select a bunch of objects with text on them and hit ⌘-B to bold them, and the inspectors happened to be recalculating what the correct bold font was, over and over, for the same source font. That's something that's super fast to do once, but if you do it for a thousand objects, it starts to add up.
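The fix for that kind of repeated work is memoization: cache the result keyed by the source font. A hedged sketch (the function and font names are stand-ins, not the real font machinery):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def bold_variant(font_name):
    """Stand-in for an expensive font lookup. With the cache, a thousand
    selected objects sharing one font pay the lookup cost only once."""
    bold_variant.lookups += 1            # count how often the slow path runs
    return font_name + "-Bold"

bold_variant.lookups = 0

# Bolding 1,000 objects that all use the same font:
bolded = [bold_variant("Helvetica") for _ in range(1000)]
```

The per-call cost is trivial, which is exactly why this class of slowdown hides in a profiler until a large selection multiplies it a thousandfold.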
Brent Simmons: So how did we find specifically in the code where the slowdowns were? Did we just use Instruments like people tend to?
Tim Wood: Yeah. That's pretty much the starting point is use Instruments. Some things are perfectly obvious when you do that. They just bubble right to the top. Other things are sort of scattered throughout the app and are a little harder to recognize. One of the tricks for tracking down some of those, though, is instead of running with the Time Profiler instrument, running with the Allocations instrument. One of the things that's really easy to miss is allocations of temporary objects. Getting rid of those can be a huge time savings.
Brent Simmons: Back when the iPhone was new, one of my cardinal rules of programming was just don't allocate any memory at all.
Tim Wood: It's the best way to be safe.
Brent Simmons: Obviously, I have to break that a little bit, but it's still, it could still add up. Yeah. Especially with a large Graffle document with a lot of stuff on it. Yeah, it makes sense. But that's the good note. Use the Allocations profiler.
Tim Wood: And particularly, the Allocations instrument has a column for transient allocations, which some people don't seem to know about, which are objects that are allocated and then deallocated within some period of time.
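Tim's point about transient objects translates to any language. In this Python sketch, both functions compute the same total, but the first builds a throwaway list on every call — the kind of short-lived allocation the transient column surfaces:

```python
import tracemalloc

def total_area_with_temporaries(shapes):
    # Builds a throwaway list of areas: a transient allocation per call.
    return sum([w * h for (w, h) in shapes])

def total_area_streaming(shapes):
    # A generator feeds sum() one value at a time; no temporary list.
    return sum(w * h for (w, h) in shapes)

def peak_bytes(fn, shapes):
    """Peak traced heap usage while running fn(shapes)."""
    tracemalloc.start()
    fn(shapes)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

shapes = [(3, 4)] * 100_000
```

The results are identical, but the streaming version's peak memory is dramatically lower; in an allocation profile, that difference shows up as a pile of transient objects that were created only to be thrown away.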
Brent Simmons: So then how did we measure success? So a specific problem, we think is solved. How did we determine that that's true?
Tim Wood: Well, the first step is: once you've made an improvement, don't lose it. So we extended Apple's XCTestCase support for doing performance measurements, so that performance results are sent over to our build system and recorded there. And then we have an app that lets us view historical trends. If a test is slower than it's supposed to be, the build system will fail the build, and we go look at it and figure out what's going on.
Brent Simmons: So are all these tests just basically in code or is it launching OmniGraffle and doing some set of instructions?
Tim Wood: Some of the tests are easy enough to do in small headless test cases, but a lot of the areas that we need to check for performance issues are at all levels of the app: the inspectors, rendering, scrolling, actually pretending to click and switch canvases. So we do have a Mac Pro in our server farm where there's a user logged in, and every couple of minutes Graffle fires up and starts doing strange things and...
Brent Simmons: Right, sure. So Ken, how did we watch for regressions? Were the tests able to cover all those kinds of things or...?
Ken Case: Well, it depends on which kind of regressions you have in mind. So the—
Brent Simmons: I was thinking of, if we're touching a whole lot of code—
Ken Case: Functionality regressions.
Brent Simmons: Yes, right. Then...
Ken Case: Mostly that involved manual testing. We have some automated tests in place that will catch some of the issues and I think actually — maybe we'll hear from Dan on some of this in a little bit — but a big piece was just inviting customers to start testing with us and then telling us what they noticed that was broken. Maybe labels not updating properly, or whatever.
Brent Simmons: So Dan, of course you work closely with the testers, running the test bashes and such. How has that gone? Has the performance stuff been able to get in pretty well without causing a lot of backtracking or...?
Dan Walker: It definitely creates a bigger load for the testers. When we decide to touch text or touch images or something like that, it's a large scope to try and retest and re-cover. We've got some tests in place for things like SVG, which have been very helpful in telling us if we've regressed in those areas. But—
Brent Simmons: So is this a test of SVG support, or are you using SVG for testing in some other way?
Dan Walker: This is testing our import and export functionality. So, import some SVG, compare it to what it was supposed to look like, or export it...
Ken Case: And similarly we have a suite of old documents from OmniGraffle 4, even, or earlier from some of those transitions, where we had a set of documents and some AppleScript that would go through and take one of those documents and export each of its canvases to an image. And so I updated that pipeline to let us compare the output of earlier versions of the app with the current version, and let us know where exports had unexpectedly differed. And that's a manual process because things like shadow rendering do change from release to release and with pixel densities and color densities and so on. So it's not always going to be exact, but there's a bitmap comparison tool that will mark changes in varying values of red. So that was an easy way to kind of see, okay, these things have shifted out and that text is not where it's supposed to be anymore.
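[Editor's note: the bitmap-comparison idea Ken mentions — mark changed pixels "in varying values of red" — can be sketched like this. A toy illustration operating on bitmaps as rows of (r, g, b) tuples, not the actual comparison tool.]

```python
def diff_in_red(old, new):
    """Compare two same-sized bitmaps (rows of (r, g, b) tuples).

    Returns an image where unchanged pixels are white and changed
    pixels are a red whose intensity reflects how different they are.
    """
    out = []
    for old_row, new_row in zip(old, new):
        row = []
        for (r1, g1, b1), (r2, g2, b2) in zip(old_row, new_row):
            # Average per-channel difference, 0..255.
            delta = (abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)) // 3
            if delta == 0:
                row.append((255, 255, 255))          # unchanged: white
            else:
                row.append((255, 255 - delta, 255 - delta))  # changed: redder
        out.append(row)
    return out


a = [[(0, 0, 0), (10, 10, 10)]]
b = [[(0, 0, 0), (110, 110, 110)]]
marked = diff_in_red(a, b)  # first pixel white, second pixel red-tinted
```

A reviewer scanning the diff image can then ignore faint pink (shadow rendering drifting a little between releases) and investigate saturated red (text that moved).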
Dan Walker: The beauty of the automated test going forward is, like the SVG has proven before, these performance tests will keep us in line and make sure that our performance at least stays at this baseline, if not improves in the future.
Brent Simmons: And then screens will go to like 4X or something, and then we'll have to do it all over again. This work even extended all the way out — Dan, you mentioned — into the size of the package. You don't have separate localized template files anymore; instead there's a strings dictionary, and swapping in and...
Dan Walker: Yeah. Previously, for each locale that we were localized in, we would have a copy of every sample document, every template, and every stencil that we bundle with the app. It happened to work out well for performance to merge all those together into a single template and stencil file with strings files embedded in it, so it's just capturing the actual differences between locales.
Dan Walker: That was also partially for my own sanity, since I manage all those files, and every time we would make just a subtle change to the document itself, I would then have to go update every English version of that, and then go update in the other 10 languages that we support, so quickly works out to hundreds of files that I'm touching. And so I'm quite happy with this update.
Brent Simmons: Yeah, so we're in English and 10 other languages. That's... I think that's standard across our apps. Is that right? Yeah, same set. Yeah. Which I notice every time I have to do localized screenshots for Apple. So even the size of the app package is down. That's a cool thing. Anything I'm forgetting about performance?
Ken Case: Well, if people out there have had a chance to look at this release — even after we've shipped, we still want to hear from people. Fire it up. Of course if there are any functionality regressions, we'd love to hear about it. As we make performance changes, sometimes something will slip through the cracks, and if we don't hear from anybody telling us about it, then you might be the first person to let us know. But also, of course, if you see any areas that are still slower than you would expect, we would love to hear from you and find out what we can do to improve those use cases.
Brent Simmons: Sounds good. There is an anonymizing function in the app so you can swap out your text and pictures.
Ken Case: Yeah, so if you have the document open in OmniGraffle and you go to the Help menu and choose Contact Omni, it will offer to send an email message to us. Well, it will prompt and ask, "Would you like to send in an anonymized copy of your document?" as well, and if you do that, it will swap all of the text around, replacing all of the words with just X's or something, and the images get replaced with a stock built-in image that has nothing to do with your confidential images. However, that said, because that image is now just the stock anonymized image, it may not have the same performance characteristics that your original images did.
Ken Case: So, you can check the anonymized copy. It will open an email message that has this attachment, and before you even send it, you can open that document and see whether it still has the same problem that you were trying to demonstrate to us. If not, and you can find some way to reproduce the problem, whether it's putting the original image back in, or another one sort of like it, with similar characteristics, size and depth and so on, then that would be really helpful to us.
Brent Simmons: After all, performance is something we can always work on and make better.
Ken Case: I mean, people always love for things to go faster. I know I do.
Tim Wood: If we make it work well with 500 canvases, then somebody will start putting in a thousand.
Brent Simmons: Yeah, that's—
Ken Case: That's already happened.
Dan Walker: We're still surprised by the user that actually has 500 canvases, so send that in and...
Brent Simmons: Wow, do they have the new Mac Pro?
Dan Walker: They won't need one now, with the performance improvements.
Brent Simmons: There you go — we just saved them like $10,000! That's pretty cool.
Dan Walker: But yeah, those are the kind of files that we use now for our test cases.
Brent Simmons: Sounds good. All right. Well, thanks Ken. Thanks, Tim. Thanks, Dan. I'd also like to thank our intrepid producer, Mark Boszko. Say hello, Mark.
Mark Boszko: Hello Mark.
Brent Simmons: And especially, I want to thank you for listening. Thank you. Music!
SFX: [MUSIC PLAYS]