Conference fatigue

I’ve just finished a week of conferences on electronic visualisation and the arts and digital humanities.  It’s been a long week, but it has been a rewarding one.  There’s a vast amount of work going on, and most of it is clever and valuable.

There were some major themes that emerged from discussions this week.  One of them was that a great deal of computational work in the humanities seems to be focussed on textual analysis – algorithms and their applications.  Visualisation is often coupled to this analysis – the use of charts and graphs to show the sum total of various statistical variations in the text.  While this work is great, it’s not the only way that data visualisation can be applied to humanities work, as we have shown with the Flickr Commons Explorer and as Mitchell has shown with the Archives Explorer.

Another thing that is only just beginning to be addressed is the importance of the interface.  So much cultural information is compiled into databases and then hidden behind web-based interfaces that often limit the information, or at least present it in a particular way that isn’t necessarily conducive to certain kinds of use.  For example, the search interface reigns supreme, and often this can have the effect of hiding data rather than revealing it.

Museums continue to be very interested in ways of reaching out and engaging with their audiences, but I think many of them underestimate how good some of the work in this area has been.  In my estimation museums and other cultural institutions are among the most innovative users of mobile media and social networking – way ahead of government, education and most businesses.

Another big thing that came across for me was the importance of remembering that while Australia has so far pretty much escaped the economic catastrophe that is befalling the US and Europe, over in England the mood is quite different.  Here the talk is of cuts – 25% cuts to the cultural sector, threats to the viability of humanities disciplines, and even entire universities.  On one hand this might mean we can pick up some good scholars from overseas, but on the other hand it is a sobering reminder that economic chaos is generally a bad thing for the cultural sector.

Packing pixels

Over the past few days I’ve been busy packing rectangles into squares.  The packing problem is a well known one, and is an active area of research in computer science.  There’s this project, for example, that’s seeking to document and implement all known packing algorithms.

Packing is an area of interest and active research in computer science not only because it’s a challenging problem, but also because it has a range of practical applications – from working out arrangements of molecules to actually packing objects into boxes in the most efficient way possible.

My own foray into box packing algorithms comes from a different angle: I want to pack text.  I’ve been working on building a tool for visualising Moodle log data.  A tag cloud is a perfect way to see which students have been accessing the Moodle site the most.  My default tag cloud was the standard one – the tags in the cloud are presented as a sequence of words of various sizes, eg:

[Image: tag cloud example text in a default layout]


This is pretty much the default tag cloud format, and you see it everywhere.  But then there are more exciting and aesthetically pleasing examples of tag clouds – none better than Wordle.  Wordle’s graphics are all the more impressive because they are generative – that is, they are created by an algorithm rather than by manual placement by a human.  The thing is, the Wordle algorithm is not publicly available and there’s no API.  A quick search turned up lots of papers on packing algorithms and a few relating directly to using packing methods to create tag clouds, but all at a fairly abstract level, and few with any example code.

To be honest, I smelled a fun challenge and didn’t want to dig too deep because I wanted to solve the problem myself – even if it was a case of reinventing the wheel.  The problem: given 500 rectangles, all of random sizes, how can they be drawn on the screen in such a way as to a) use the available screen space effectively, and b) look interesting to the human viewer?

I played with a few ideas before I came to a space partitioning approach.  Essentially, what I decided to do was think of the screen as a rectangular ‘region’ that could contain a number of rectangular child regions.  When a rectangle gets added to the region, two other rectangles are formed like this:

[Image: subdivision example]


The top left (light grey) rectangle contains the object we want to place – text, in my case.  The other two rectangles are empty, and can themselves have an object added to them and then be further subdivided.  The idea is that we start off with an initial empty region and add a rectangle, which leaves us with two free regions.  Then when we add another rectangle, we put it in one of these two regions, which creates another two free regions, and so on, until the regions get too small to fit any more objects.
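In code, the idea looks something like this.  It’s only a minimal Processing-style sketch of the top-left version – the names, sizes and colours are illustrative, and it isn’t the actual CRegion code from the applet posted below:

```
// Minimal sketch of top-left region packing (illustrative, not the real CRegion).
// A Region is a free rectangle.  Placing an object in its top-left corner
// splits the remaining space into two free child regions: one to the right
// of the placed object, and one below it.
class Region {
  float x, y, w, h;
  Region(float x, float y, float w, float h) {
    this.x = x; this.y = y; this.w = w; this.h = h;
  }
  // Try to place an object of size ow x oh in the top-left corner.
  // Returns the two free child regions, or null if it doesn't fit.
  Region[] place(float ow, float oh) {
    if (ow > w || oh > h) return null;
    rect(x, y, ow, oh);                                // draw the placed object
    Region right = new Region(x + ow, y, w - ow, oh);  // strip to the right
    Region below = new Region(x, y + oh, w, h - oh);   // strip underneath
    return new Region[] { right, below };
  }
}

ArrayList<Region> free = new ArrayList<Region>();

void setup() {
  size(800, 800);
  noLoop();
  free.add(new Region(0, 0, width, height));  // start with one big empty region
}

void draw() {
  background(255);
  for (int i = 0; i < 500; i++) {
    float ow = random(10, 80);
    float oh = random(10, 40);
    float t = map(i, 0, 500, 0, 255);
    fill(255 - t, t, 0);  // early rectangles red, late ones green
    // naive strategy: drop the rectangle into the first free region it fits
    for (int j = 0; j < free.size(); j++) {
      Region[] children = free.get(j).place(ow, oh);
      if (children != null) {
        free.remove(j);
        free.add(children[0]);
        free.add(children[1]);
        break;
      }
    }
  }
}
```

The useful property is that the placed rectangle and its two free children exactly tile the parent region, so nothing ever overlaps – the only waste is free regions that end up too small to hold anything.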

[Image: top-left pack]


Doing this with 500 randomly shaped rectangles creates the image above.  The red rectangles were placed first, the yellow ones in the middle of the process, and the green ones were placed last.

I also experimented with different placements, which required up to five child rectangles.  The first, pictured left in the above image sequence, shows a centre pack, where rectangles are placed in the centre of the subdivided region.  The middle one shows a bottom-right packing sequence, and the last one shows a random packing sequence, which is a mix of all three methods (top-left, centre, bottom-right).
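For what it’s worth, the centre variant amounts to carving the region into the placed rectangle plus four free strips around it – again, just a sketch of the idea, reusing the Region class from the snippet above:

```
// Centre placement (sketch): the object sits in the middle of the region,
// leaving four free child regions around it (five rectangles in total).
Region[] placeCentre(Region r, float ow, float oh) {
  if (ow > r.w || oh > r.h) return null;
  float ox = r.x + (r.w - ow) / 2;  // centred position of the object
  float oy = r.y + (r.h - oh) / 2;
  rect(ox, oy, ow, oh);             // draw the placed object
  return new Region[] {
    new Region(r.x, r.y, r.w, oy - r.y),                   // strip above
    new Region(r.x, oy + oh, r.w, r.y + r.h - (oy + oh)),  // strip below
    new Region(r.x, oy, ox - r.x, oh),                     // strip to the left
    new Region(ox + ow, oy, r.x + r.w - (ox + ow), oh)     // strip to the right
  };
}
```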

Out of all of them, the top left method produced the best results (although I’m still experimenting with other methods).  I’ve also added some extra code to account for rotation, and even auto-rotation to help fit text (which is buggy at the moment).  When turned into a tag cloud, the results aren’t Wordle, but they’re better than the basic tag cloud:

[Image: lorem ipsum tag cloud]

And this is how Wordle rendered the same text:

[Image: Wordle: ipsum]

Not quite Wordle, but then again, I’m not quite Jonathan Feinberg, either.  I’m posting the Processing applet here, too, so if anybody out there wants to give me any feedback or suggestions, please let me know.  The class that does most of the work is CRegion.


Bartlett vs Hodgman: battle of the web sites

I’m doing a 5-minute chat with ABC Hobart’s morning show on Monday to compare the web sites of the Tasmanian premier, David Bartlett and the opposition leader Will Hodgman.  After some careful consideration and consultation with some colleagues, I’ve come to the conclusion that if politicians make web sites, they need to be banned from YouTube.

At first glance the web sites look similar, although Hodgman’s is superior in terms of layout and design – mainly because he’s running a Drupal site using the zen classic template.  Can’t go very far wrong there.  I’m not sure what Bartlett is using, although his site is within a .gov.au domain, so I assume it’s some variation of whatever the department is using for other pages.

Given this, the accessibility and W3C standards compliance results aren’t surprising.  Both were reasonably OK with accessibility (although Bartlett had an image missing a text description – which is a real pain for vision impaired readers who rely on screen reader software).  The W3C standards test revealed 11 errors on Hodgman’s page, 73 errors on Bartlett’s.  So, thanks to Drupal, Hodgman managed to beat Bartlett hands down on standards and accessibility.

Technically, then, Hodgman wins, thanks to his use of Drupal.

In terms of content, things become tricky.  One of the first things I tell people to work out is the purpose of their site – especially if they already think they know the purpose.  A site with a mixed purpose is confusing and will fail to achieve its goals. Here we have two sites that seem to do the same thing, but actually don’t.

Hodgman’s site is about Hodgman.  All the links and feedback and information are about Hodgman and his campaigns.  There’s a little bit of the man here, and a lot of the public façade.  So, if the goal of Hodgman’s site is to promote the individual, then it works.  Bartlett’s site suffers from a mixed personality.  At first it seems to be presenting the Premier in the same way as Hodgman’s (ie: this is a site about a person, a personal home page), but then as you dig in, you find that most of it is about his government – policies, cabinet, portfolios, etc.  The two things don’t work together very well.  He’d be better off with a Premier’s page with a .gov.au URL and a separate Bartlett page that promotes Bartlett.  Instead, it looks like Bartlett is hiding a lack of substance by diverting attention to his government.

So, in terms of content, Hodgman wins again.

This of course all intersects violently with the curious use of social networking by political parties.  Here’s the thing: political parties are starting to hear all about teh Interwebz and get this idea that there’s a group of networked people out there (mainly imagined as the elusive youth vote).  They know the technologies that in part define social networking: things like YouTube, Facebook, MySpace, Twitter, “podcasting”, but they don’t yet understand how these things work at a social level.

Social networking is based on a kind of interpersonal trust.  A user puts as much or as little of himself or herself into the social network as he or she feels comfortable with.  The trust runs both ways.  It’s hard to trust someone who seems phoney or insincere.  In the online world, communication is interpersonal, but in the political world, the politician is removed from the public by status and by a well designed communication strategy that filters information for the politician.  There are good reasons for this.  Even the most committed politician can only do so much, and there are plenty of people out there who expect to gain a lot more than the politician can deliver.  So, the politician has become a machine – when you send a letter to your local member, you’re sending a message to his or her machine, to be read, filtered and answered by the machine.

Online, this machine doesn’t (yet) work.  It looks clunky and old-fashioned.  On my blog, if you leave a comment you expect it to be answered by me, not by someone pretending to be me, and not by someone who has the authority to answer on my behalf.  When I set up a Facebook page, the expectation is that I’m putting myself out there – or at least part of myself.  If someone writes on my wall, they should get a response from me.  If I want my machine on Facebook, then I should identify it as my machine, not as me.

Similarly, you need to understand how people use the technologies.  YouTube is a place for interesting videos.  They may be videos of people or animals doing stupid things, or they may be informative or even documentary.  In any event, they need to be interesting.  Videos of politicians talking to audiences are dull.  The only people likely to watch them are the politicians and the people who feature in the videos.  If you want to use YouTube effectively, don’t post videos of yourself in a suit talking to a camcorder held by one of your staffers.  Have someone follow you around with a camcorder and when you trip going up the stairs to the podium, or when you say something embarrassing but politically harmless, post that.  It will make you appear more human, and it will be interesting.  Who didn’t like watching John Howard trip in Perth?  Just make sure you get up and make light of it afterwards.  So, for both pollies, their use of YouTube is a fail.

Both Bartlett and Hodgman have facebook pages.  Bartlett has just over 1,700 friends.  Hodgman has 700 (including the Federal Shadow Treasurer, Joe Hockey). Here’s the surprising thing, though – Bartlett seems to engage with his page much more in the spirit of social networking.  Hodgman less so.  Hodgman’s posts are less numerous and more overtly political.  Bartlett’s are more numerous, often trivial and personal – and while triviality may not be the sort of thing you want in a Premier, it’s certainly the correct way to address Facebook.

So, in the final analysis, I’d award points like this:

Web site technical: David: C-  Will: B+
Web site content: David: D+  Will: B
Use of social networking: David: B+ Will: C
Use of video: David: F Will: F

Overall:

David gets the social networking thing but needs to drop the video or do something more interesting with it.  His web site sucks and is probably not worth the bandwidth. Grade: C

Will presents a much slicker and more savvy face through his web site, and the content’s much better and on-message, but his use of social networking isn’t as convincing. Grade: B

It’s good to see pollies starting to use the Internet to reach out to their constituents.  They still need to invest more time and energy in the Internet and work on actively developing their online communities, working with the technology rather than trying to impose their way of doing things on social and technological systems they have little control over.

Gabe Newell agrees

It’s always nice when you bag a CEO one day and the next day another industry leader comes out and agrees with you.  Well, kind of.  Gabe Newell, founder of Valve (the makers of Half-Life and the digital distribution platform Steam) was at the DICE summit in Las Vegas talking about the growing importance of digital distribution in the games industry [Game|Life].  Okay, probably not so surprising that one of the key digital distributors is talking up digital distribution, but Newell’s predicting that in the next generation of consoles, digital distribution will play a decisive role.

Newell’s argument is that Valve’s experience with Steam has shown that the value in digital distribution is in the information it provides about sales and user activity.  So, a console that collects this information and uses it to connect publishers with consumers is going to fare well. But what happens when the console or digital distributor is also the publisher? Microsoft and Apple take on this role with XBLA Community Games and App Store respectively.

I can’t see a publisher like Activision or EA going anywhere in a hurry, but the ground is starting to move (again) and companies that are not flexible enough to ride the shift are going to crumble.  Short to medium term, then, we might see publishers going into contracts with digital distributors to get their games out and provide metrics in return.  The big publishers are still the only entities with enough cash to fund AAA game development.  Longer term, if a digital distributor is big enough, if digital distribution becomes a primary way to purchase games, and if digital distributors have enough relationships with developers, then for all intents and purposes, they become publishers.

EA talking like a l33t troll

Over at Wired Game|Life they’ve covered EA’s response to the global economic crisis.  In a nutshell: the GEC is good for the game industry because it will destroy all the small companies that produce lots of crap titles for consoles.  Some would say it’s EA’s seemingly infinite capacity to extend franchises beyond all reason (eg: the Sims + various expansions, numerous EA Sports titles, etc.) that spawns most of the crap on game store shelves.  Others might argue that EA’s behaviour tends towards the monopolistic, and that their reluctance to try anything new when they can make a quick buck from putting out another Sims expansion pack is retarding industry creativity.  Still others would argue that the game publishing model (of which EA is a sterling example) that makes it all but impossible for small or medium sized developers to get a fresh title funded is more to blame for the crap on the shelves.

It’s perhaps a little ironic that it’s small studios who may be well placed to reap the greatest rewards from the GEC.  If digital distribution turns out to be the new game publishing and distribution method of the next decade, it’s today’s small developers who’ll be tomorrow’s major labels, and they won’t be interested in doing publishing deals with EA.  Publishers like EA may find themselves whining about the evils of online distribution in much the same way that the once arrogantly powerful record publishing industry did.  I’m sure that small startups riding high on digital distribution and staffed by the talented people left in a ditch by EA’s kinicide will weep real tears.

Why so much venom towards EA?  Well, as companies go under and real people lose their jobs through no fault of their own, fat cats like EA’s CEO John Riccitiello stand in front of slides proclaiming that the way to stay in business (nay, PROFIT) in a recession is to cut, cut, cut.  Gee, I wonder if Riccitiello would be so gleeful if his salary and share options were cut each time he cut one of the EA “family”. I’m guessing not, but then he’s got nothing to worry about.  A certain amount of humility would go a long way, Mr Riccitiello.

Australian Games and the GEC

So what does the Global Economic Crisis mean for the Australian game industry?  We probably need to answer this question by looking at the industry itself.

The Australian games industry is an interesting beast.  It is made up of some 70 developers ranging in size from around 200 employees right down to one or two person operations.  All Australian developers are privately owned, which means their financial data is not in the public domain, since they don’t have shareholders to report to.  This makes it pretty difficult to get a clear idea about the exact shape and size of the industry.  From a report by the Australian Bureau of Statistics, which surveyed 45 Australian game companies, we can pretty confidently say that most employment, game production and income is generated by the top ten or so game companies.  Many companies, including those not in the survey (only companies with Australian Business Numbers were surveyed), are quite small and focus on contracting to larger companies, doing ports or doing mobile and casual game development.

Many Australian studios are independent, which means they are not beholden to any single publisher.  This gives them a great deal of flexibility in deciding what they will work on, but it also means greater uncertainty, as they need to compete with other developers for the rights to make a game.  Other studios that are owned by larger developers or publishers have more certainty, but less autonomy.  While the studio itself is run day-to-day like an independent studio, budgets, game features and development timetables are determined by the parent company.  The exact nature of this relationship is complex, and the amount of autonomy the local studio has differs from studio to studio.

Both independent and owned studios have challenges they need to face in the current economic climate.  Perhaps the most exposed are the owned studios – like Pandemic.  If a studio is owned by a larger publisher, then the fortunes of the development studio are linked to the fortunes of the parent company.  As the economic crisis hits the game industry, the larger publishers are engaging in belt-tightening, which includes layoffs.  For a company like Electronic Arts, which employs thousands of people in an array of studios worldwide, this means cutting the worst performing studios free.  From what I have read in the Australian game media, Pandemic is an exemplary case of how ownership by a large publisher can be disastrous for Australian companies.  A failure to deliver a major title (a game based upon the latest Batman movie, The Dark Knight) coupled with geographic remoteness and economic turmoil meant that Pandemic became a ripe target for EA closures.  To EA’s credit, they appear to be allowing the studio to go it alone, with their assets (eg: computers, dev kits) and a new game that they’re currently working on.  With any luck a new independent studio will arise from the ashes of Pandemic Brisbane.  If it were me, I’d name the new studio “Phoenix”.

If Pandemic Brisbane provides a clear example of the risks for owned developers, then the risks for independent developers are less clear.  Many – if not most – Australian independents are in the small game/casual games market or mostly work on ports, which the industry tends to refer to as SKUs (Stock Keeping Units) – essentially a unique product, so different versions of the same game count as separate SKUs.  So, if an original game is released for the PC, a publisher may employ other companies to do versions of the same game for XBox, Playstation, Wii, etc.  Star Wars: The Force Unleashed came out with a number of SKUs, one of which (the Wii version) was developed by Krome Studios.

A clear danger for Australian independents is that this work may dry up if the costs and perceived risks of outsourcing SKU work to Australia scare off publishers.  To date the financial crisis has seen the value of the Australian dollar fall against the US dollar.  This makes Australia a more attractive development environment for US publishers because it lowers development costs.  The problem is that by hitching their wagon to US publishers with the value of the dollar as the link, Australian developers become dependent upon the exchange rate – and when it changes, the industry will flounder.

With these dangers come opportunities.  When I get a chance, I’ll post about them, too.


Oz game industry takes its first GEC casualty

According to various sources, Pandemic Studios in Brisbane has struck the economic iceberg, with the apparent loss of all hands.  It becomes the Australian industry’s first casualty of the global economic crisis.  Pandemic was originally an independent developer with two offices – one in Brisbane, the other in Los Angeles.  They have put out a number of decent titles (such as the well-received Destroy All Humans) and represent one of Australia’s top game companies.  In July last year Pandemic merged with Electronic Arts in a deal which was, to quote Gollum, “tricksy”.

Pandemic’s impending demise – if the rumours are to be believed (and it seems like they are) – is a worrying development for the Australian game industry.  It may presage further closures in the future.  There seem to be two schools of thought regarding the relationship between games and the economy.  Some argue that games, like films in the Great Depression, will survive and perhaps even thrive as people turn to entertainment as an escape from the less than fabulous reality of a poor economic climate.  This perspective does seem to be supported by the stats – sales growth for games and consoles is still in the double digits, and where retailers in most sectors are seeing sharp declines in profitability, some games retailers are recording increased sales – up to 22% in one case.

The other perspective is the pessimistic one.  Game development is highly reliant upon investment money, and while returns for the top games can be fantastic, picking a winner can be hard.  Put simply, games are a high-risk investment.  In an economically conservative environment, sources of investment capital for game development are going to dry up, which in practical terms means fewer big game projects and more project cancellations, and this translates to job losses and perhaps studio closures.  For the biggest companies like Electronic Arts and Activision, who can finance their own game development, it means pulling back to safer core business and maintaining profit margins.  When growth through development is harder to come by, maintaining profits is a matter of saving money – or, as EA’s corporate communications manager, Mariam Saghayer, puts it, through a “cost reduction initiative that will impact facilities and headcount”.  While that makes it sound like Mariam works for a beef exporter, Mariam does in fact work for EA, and despite her corp-talk, she means closures of “studios” and sacking of “people”.

Most likely both of these perspectives will co-exist.  Game companies can be selling lots of titles and increasing profits while there is an overall contraction in the industry as a whole.  This will possibly translate to less innovation in the mainstream game market and a slowing of the development cycle.  In other words, there won’t be a next-gen console for some time to come – it will be Wii, PS3 and XBox360 at least until the economy becomes stable enough for companies like Sony (who recently took a $1.1 billion hiding) to feel comfortable taking risks.  So, expect to see the big companies do more of the same and milk tried and true strategies for everything they’re worth.  The Sims 3 will go on and on and on.

What this means for the Australian industry is an interesting question, which I’ll leave for another post.

European Flights in 3D

Last year I put together a visualisation of some aircraft flights that I constructed by scraping Qantas timetables from their web site and interpolating the flight data between airports.
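That interpolation was nothing fancier than a linear blend between the two airports over the scheduled flight time – something like the following Processing-style sketch (the coordinates and times here are made up, and this is not the original code):

```
// Straight-line, constant-velocity interpolation between two airports (sketch).
float depX = 100, depY = 600;  // departure airport, screen coordinates
float arrX = 700, arrY = 150;  // arrival airport, screen coordinates
float departTime = 0;          // minutes past some reference time
float arriveTime = 90;

// Position of the aircraft at time t, assuming a straight path and constant speed.
float[] planePosition(float t) {
  float progress = constrain((t - departTime) / (arriveTime - departTime), 0, 1);
  return new float[] { lerp(depX, arrX, progress), lerp(depY, arrY, progress) };
}

void setup() {
  size(800, 800);
}

void draw() {
  background(0);
  stroke(255);
  line(depX, depY, arrX, arrY);                // the whole flight path
  float[] p = planePosition(frameCount % 90);  // animate along the path
  noStroke();
  fill(255, 200, 0);
  ellipse(p[0], p[1], 6, 6);                   // the aircraft
}
```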

The biggest weakness with this was the lack of detail.  Really, all I had was a start and end point for the flight, so I was just drawing lines assuming a straight line and a constant velocity.  More recently I was trawling through some posts on an astronomy forum and someone was asking if there was some way they could determine what flight they were looking at whenever they saw a plane pass over (aircraft at night can sometimes look very odd).  Some people pointed out that airports have FIDS services that provide information about aircraft departures and arrivals.  That’s not much better than scraping the Qantas site for my purposes, but it got me thinking – are there any other datasets for flights online?

The most famous example of an aircraft flight visualisation that I know of is Aaron Koblin’s Flight Patterns piece.  He used a combination of Processing and Maya to render the lines and shapes that are drawn by the ghosts of aircraft as they make their way across the continental United States.  For his piece Koblin used data from the US Federal Aviation Administration.  I have no idea how he got the data he used.  Certainly, in the post-9/11 age, where terrorism is a constant source of anxiety for some, looking to access things like flight data is going to raise the wrong sorts of eyebrows.

Nonetheless, I did find something: OpenATC.  OpenATC provides flight data for airlines based on information that enthusiasts can collect with a radar dish and a $1000 device called an SDS-1.  These can be used to grab flight data reported by aircraft as they pass near a person with an SDS-1 set up.  At the time of writing, there’s pretty good coverage for Europe and some coverage in NSW in Australia, but very little elsewhere.

Inspired by Koblin’s work, I did a 3D version in Processing.  You can see the results here: http://noobtronic.blip.tv/

Xbox live community games

The following are notes I made during one of the sessions at Game Connect Asia Pacific 2008.

MS game development for PC, Xbox and Zune.  Can develop with only a PC.  You use XNA GS 3.0.  XNA is a framework for building games.  Visual Studio (also free version) used, code in C#.  You can make games using free versions of all the required software downloadable from MS.

http://www.xnagamefest.com

We’re getting a live demo of building games in Visual Studio. They have templates that let you just build a game.  Select template, compile -> game works.  Good way to learn how to make games because you can hack the template games.  All assets are included, too.

The basic kit is free but you can only build games for Windows.  You can buy premium membership and develop for XBox360.  You can easily transfer builds from your computer to your XBox over a network cable.  Students can get free Xbox-capable kits to test, but can’t publish.  Can’t publish from Australia yet either.  MS needs to chat with the Classification Board, but is expecting Australia to come online by the end of the year.

Lots of education materials on http://creators.xna.com.

Developers of live community games get a 70/30 split in profits in the developer’s favor.

To submit a game to live community you need premium membership.  Not sure how much this costs. When you submit a game you need to self-rate using a form as part of the upload process. You then add a bunch of other data and upload the binary. This can be 150 megs max. You also select a microsoft points price (MS points are like credit on XBox, you can purchase points with a credit card or a store card like an iTunes card).

This raises all kinds of issues about user created content. What about copyright materials in the game – especially if they contain music and graphics? Microsoft says they bear no responsibility – you’re on your own if you upload illegal or prohibited content.  A lot of people don’t understand copyright in my experience – I’m expecting this to be a learning experience for many.

When you submit it has to go through a peer review process.  This process is about content and legal issues, not about the value of the game.  There’s a list of prohibited content and it’s meant to be worldwide, so things in the prohibited list reflect the most stringent standards of the most restricted participating country.  Part of this process is identifying copyright materials as well as illegal materials.  MS will inform authorities if you try to upload highly illegal stuff.

Games need to be reviewed by eight people. This is currently taking around 10 hours to complete. Games are then quarantined for 48 hours before becoming available to the world.  You can release timed trials too, so people can play your game for a short time for free.

Microsoft won’t own IP. You are free to republish your games in other formats.

Gcap 08 Friday keynote

Jay Wilbur – Epic. Jay used to be with id software. Jay is speaking about looking at Unreal Engine 3 as middleware.

Showed a demo of Nurien’s Mstar from Korea as an example of an Unreal based game that does not look like a typical Unreal engine game. Game is part of the whole Korean online community thing which is huge.

Jay is really selling the engine, making the point that it does not really make sense to build engine tech from bottom up each time. Game devs can license middleware for use in multiple titles, build libraries of content and realise efficiencies.

Content is the most expensive part of any next generation game. Jay says that this makes it even more vital that companies expend less energy on the engine tech. He says that with licensed tech, content development can start immediately.

Middleware is ‘battle tested’ which makes it more robust, more optimised and more useable because it is well supported by tools.

Middleware is not just the engine.  Components can be used in isolation from the engine – for example, the physics engine or the audio engine.

Jay is now discouraging people from modifying the game engine.  He says if you use it out of the box, it’s cheaper, simpler, less risky.  Something called ‘support outlets’ allows developers to share fixes and features.  A kind of closed open source thing.  In fact, they use an internal mailing list system for support – when a licensee sends an email in to support, all Unreal team people get it.  Someone who knows then answers it.  Clever.  Avoids horrible helpdesk syndrome without exposing tech staff to hammering.

Using the Unreal tools for developing content is recommended – developers are discouraged from using other software.  Assets may be created elsewhere, but should be brought into the tools early so the tools can produce final assets that are optimized for the engine.  In fact, they say to optimize content before code.  Overly complex content is a major source of performance issues.

Rowan Wyborn from 2K Australia now talking about their experience with Unreal on Tribes:Vengeance, Swat4, Bioshock and a currently unannounced title.

Team is split into core tech (5) and game code (10) teams.

Fast to get games up and running, supports multiplayer and multiplatform out of the box.

On Bioshock 2K did a lot of custom feature creation. This made it difficult to merge their codebase with updates from Unreal. Rowan says this was a mistake and led to a lot of additional work. Now they keep their codebase in sync and use OO techniques and #defines. Makes life complex for coders, but allows them to keep in sync with Unreal codebase.

The interesting thing is that this does not mean you don’t need a solid technology team.  What is gained seems to be the ability to maintain traditional-sized tech teams on next-gen platforms.

Back to Jay.

Unreal Engine being used in movies – a multitrack cinematic linear editor built in.  Lazy Town uses Unreal for CG.  A number of others are starting to make linear stuff using Unreal.  Even Dreamworks have looked at UE for previz.

Again, the whole convergence thing.
