Jun 7, 2013
1) Make up a few products with several names, descriptions, and images.
2) Buy Google AdWords ads for each product and variation.
3) Track which product gets the most clicks, and start selling that item.
2010 - Kickstarter Method
1) Design and manufacture a product, and prepare your business model for producing at scale.
2) Create a Kickstarter campaign to raise money to scale up production, and reward "backers" with lower-priced "pre-orders."
3) Ask all journalists who blog about the success of your Kickstarter campaign to link to your store site.
2013 - Reddit Method
1) Build a product.
2) Post a pic to Reddit claiming that either you or "your friend" made it over a weekend for the hell of it.
3) Launch your Etsy store two weeks later.
Dec 28, 2012
Director Peter Jackson already broke some solid ground with everything about The Lord Of The Rings, and came out very successful in the end. After converting over to a purely digital format (The Lovely Bones was his first digitally-acquired feature), he quickly moved on to 3D- a process a lot of great filmmakers adopted just as quickly. Audiences, however, have been slow to praise 3D because the picture is eye-straining, and the premium on ticket prices makes it difficult to believe the experience is that much better. But ticket prices be damned, the theory is that a higher framerate in projection can reduce the headache. So after countless press statements from director James Cameron that 48fps would solve these problems, Jackson became the first feature director to stick his neck out for the new paradigm.
My friend Renn Brown wrote a fantastic article on CHUD about how much of the criticism and debate around the new format rests on a fallacy- no one has said all features should be shot this way, and in fact Jackson himself called it a "tool" for telling the story (it's only the brush and not the whole "canvas," as Brown puts it). Digital projectors can show a wide range of framerates, so it's perfectly possible for a feature to run at 24fps, jump to 48fps for a particular sequence, and then revert back.
So how does The Hobbit come across in 48fps 3D for a full three hours? In my opinion: not great. And I was bummed about that. I absolutely love new, experimental processes, and the idea that the medium is what we make it. I've never minded 3D (though over time I've concluded it isn't worth the additional 50% premium on the ticket price). On a 16mm short film that included a 60p television sequence, the director and I once talked about how we were both getting a little tired of watching 24fps all the time (this after the 24p evolution and the HDSLR revolution made nearly all the video we see on television and online 24fps). I saw the film first in 24fps 2D, and loved it. So when I saw it in 48fps 3D, I was heartbroken by how disappointing it was.
I don't want to call it the "disaster" everyone seems to be describing it as, but the fact is I was conscious of it the whole time. And as an audience member, if I don't get lost in the story and characters and only think about how everything looks sped-up and cartoony, that's a big problem. Some sequences in the new framerate I thought were fantastic- contrary to Vincent Laforet (whose long article inspired me to write out my opinion), I thought the whole Gollum scene was beautiful. That was one of the few sequences where I thought the 3D looked great and the high framerate was no problem at all. But that was the very first sequence they shot (so I wonder if the crew worked harder during production to get it perfect than they did on the rest of the film), and the movements of the camera and characters weren't as grand as in other sequences.
What I'm very curious about is what will happen with the next two films in the series- production has ended, and the material is in the can in this 48fps style. As we've seen, 24fps 2D and 3D versions can be extracted from that material (and as far as the 2D goes, I think it looks great), so will HFR releases be reduced to just a handful of theaters? Or, to put it more accurately, to just a handful of showtimes on the same projectors that screen it the other ways for the rest of the day? Will James Cameron back down from his own loud, public stance that this is the future and we just need to get used to it? I highly doubt it.
Luke Letellier prepared an example of what this looks like by applying a frame-blending effect to the Hobbit trailer, turning the 24fps version into 48fps. If you can ignore the artifacts of the filter (particularly in quick movements, where edges have an odd "stretching" quality), the video is pretty accurate.
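For the curious, a rough version of that frame-blending conversion can be scripted. Here's a minimal sketch in Python with OpenCV (the filenames are hypothetical), which writes each original 24fps frame followed by a 50/50 blend of it and its neighbor to approximate a 48fps stream- the same kind of naive blend that produces those "stretching" edge artifacts on fast movement:

```python
import cv2

# Hypothetical input/output names; requires opencv-python.
cap = cv2.VideoCapture("hobbit_trailer_24fps.mp4")
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("hobbit_trailer_48fps.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), 48.0, (w, h))

ok, prev = cap.read()
while ok:
    ok, cur = cap.read()
    out.write(prev)  # the original 24fps frame
    if not ok:
        break
    # Synthesized in-between frame: a simple 50/50 blend of neighbors.
    out.write(cv2.addWeighted(prev, 0.5, cur, 0.5, 0))
    prev = cur

cap.release()
out.release()
```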
Dec 5, 2012
Mom fishing, circa 1956.
A few weeks ago, my grandfather passed away, so my family has been handling his estate and personal items, which of course includes a lifetime of photographs. For the memorial, my mother handed me a shoebox of prints and slides she'd come across, asking me to scan them and project them on the wall during the ceremony. The family thought I did so well with this that they handed me all of the remaining slides they'd found, and asked me to scan and share them all.
This is something that's interested me for some time (preservation and restoration are subjects I've covered here before), and collecting and restoring old photographs is something I always thought I'd pursue as a long-term hobby. Of course, this is a service photography stores around the country continue to offer, which is excellent for most people, but I wanted a good understanding of what exactly it takes to get a good digital file out of these old keepsakes.
I'd want the image to be as high-quality as possible, so I can archive it safely and store it somewhere without ever wanting to re-scan it, and so I can treat the scanned file the same way I treat my digital photos. The way I treat my digital photos, of course, is by saving them all as Adobe Digital Negatives (DNG's), which I can play around with without degrading their quality ("lossless")- not just changing colors, but changing exposure levels and performing lens adjustments after the fact. I know I may not be able to do quite as much modification as with my digital stills, but if we spent so many decades making digital photography parallel film, then I should be able to make my film parallel my digital workflow.
I'm still only into the first hundred slides because the process is tedious and my system is buggy, so I have to take care of things one at a time. But I want to say right off the bat that Kodachrome deserves as much praise as it's ever been given- some of these slides are over 60 years old and the color retention is absolutely incredible.
Kids watching TV, May 1961. Kodachrome, unrestored
This picture of my aunt and uncles is over 50 years old, and the color representation looks flawless. And I doubt this slide was kept in any better care than the shot below, taken on Ektachrome.
Lunch on the mountain, circa 1956. Ektachrome, unrestored
The green layer of emulsion has almost completely disintegrated, leaving the red and blue layers behind (I'd thought blue would be the first to go, since it's on the "weaker" end of the visible spectrum), to the point that the slide is almost beyond restoration.
Lunch on the mountain, circa 1956. Ektachrome, restored
This was a quick pass at just modifying the colors in the scanned sample; beyond this point I'd have to paint color back into the picture to bring it to normal.
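For anyone curious what that "quick pass" amounts to, here's a minimal sketch of a per-channel auto-levels pass in Python (Pillow and NumPy; the filenames are hypothetical). It re-stretches each color channel across the full range, which recovers some of what the faded layers left behind but can't invent color that's truly gone:

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("ektachrome_scan.tif"), dtype=np.float32)

# Stretch each channel so its darkest/brightest half-percent of pixels
# span the full 0-255 range again.
restored = np.empty_like(img)
for c in range(3):
    lo, hi = np.percentile(img[..., c], (0.5, 99.5))
    restored[..., c] = np.clip((img[..., c] - lo) / (hi - lo + 1e-6), 0, 1) * 255

Image.fromarray(restored.astype(np.uint8)).save("ektachrome_restored.tif")
```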
As far as equipment goes, from what I've seen there's not much available that's both affordable and easy. I'm using a flatbed scanner with a built-in light adapter (an Epson Perfection 4490 that previously belonged to my other grandfather- no doubt top-of-the-line when he purchased it for the exact same endeavor). You can buy this unit for a couple hundred bucks, while a dedicated film/slide scanner runs in the thousands. Fundamentally I'm not quite sure what the difference would be- controlled light blasts through the film onto a sensor either way. The catch with the flatbed is that you've got a giant pane of glass that can't possibly stay in good condition if you use it to regularly scan papers or solid objects- any scratch in the glass reveals itself in your scanned image. Additionally, a cardboard-bound slide introduces spatial distance between the film and the glass surface, so there's doubt whether the scanner is properly "focused" on the slide. And there's only one light source, which I can't figure out how to calibrate- I don't even know whether it's truly "white" light giving me a good representation of all three color channels.
On the plus side, the scanner does include an infrared light in addition to the regular one. Infrared light passes completely through film emulsion, but dust and specks sitting on the filmstrip block it and are "seen" by the infrared channel- this lets us tell the difference between surface defects and the grain that's supposed to be part of the image.
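In software terms, that infrared channel is essentially a defect mask. Here's a minimal sketch of how it could be used, in Python with OpenCV (filenames hypothetical; commercial scanner software does something far more sophisticated than this):

```python
import cv2
import numpy as np

rgb = cv2.imread("slide_rgb.png")
ir = cv2.imread("slide_infrared.png", cv2.IMREAD_GRAYSCALE)

# Dust and scratches block the infrared light, so they read dark in the
# IR channel while clean emulsion passes it through almost untouched.
_, dust = cv2.threshold(ir, 128, 255, cv2.THRESH_BINARY_INV)
dust = cv2.dilate(dust, np.ones((3, 3), np.uint8))  # pad the defect edges

# Fill the masked defects from the surrounding image.
cleaned = cv2.inpaint(rgb, dust, 3, cv2.INPAINT_TELEA)
cv2.imwrite("slide_cleaned.png", cleaned)
```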
Now, in terms of the actual files being created, I was disappointed to find that although I can scan straight into Photoshop, I'm not able to control the settings of the scan as much as I'd like- I'd have to bring it in as a JPEG, TIFF, or PDF file. However, I'm not alone in this disappointment, and some people have created third-party software that gives you more control of your scanner. With Vuescan, I'm able to
1) Create a DNG file (technically a DNG wrapper to a TIFF file),
2) Utilize the infrared channel.
Additionally, Vuescan can make multiple scans of your pictures at varying exposures, so you can create an image with HDR-like range- change the exposure after the fact, or at least retain the detail that would be lost in the highlights or shadows. Unfortunately this feature comes with a bug (I'm not sure where the error lies): the multiple exposures aren't perfectly aligned, so the merged picture comes out blurry.
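The fix for that blur is to align the exposures before merging them. A sketch of the idea in Python with OpenCV (hypothetical filenames), using its median-threshold alignment followed by exposure fusion:

```python
import cv2

# Three scans of the same slide at different exposures (hypothetical names).
scans = [cv2.imread(f) for f in
         ("scan_dark.png", "scan_mid.png", "scan_bright.png")]

# Align first so the merge isn't blurry, then fuse the exposures
# (Mertens fusion needs no exposure metadata).
cv2.createAlignMTB().process(scans, scans)
fused = cv2.createMergeMertens().process(scans)  # float image in [0, 1]
cv2.imwrite("scan_fused.png", (fused * 255).clip(0, 255).astype("uint8"))
```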
Girl on carnival horse, circa 1956. Kodachrome, unrestored
Researching the matter further, scanning via flatbed or even a dedicated film scanner isn't ideal. True experts perform a process called "wet-gate scanning," soaking the film in a chemical solution as it's scanned. The solution's light refraction fills in small scratches in the plastic, and it makes the scan appear sharper and more vibrant. This is standard for medium-format photography and feature-film restoration, but no machine at the consumer or even prosumer level does it. And it isn't even possible for cardboard-bound slides- you'd have to remove the cardboard casing around each picture, and after the scan you'd be left with an individual frame to either re-case or store loose in a slide holder.
I did come across one process a lot of hobbyists use to work around the bulk of these scanning limitations: building a special casing in front of a DSLR lens to hold the slide or negative and taking a digital still of the film. This way they get a camera-raw file of the picture at a small file size (each of my scans comes in at around 250MB apiece, as opposed to the 20MB files my Canon Rebel T3i creates), and they can get through pics much faster than the flatbed process allows (and much, much, MUCH cheaper).
As you can see, I'm not the only one with this problem, and the solution is a lot more complex, in my opinion, than what's being offered on the market. Truth be told, I haven't checked with my local camera shop to see if they offer solutions that affordably meet the challenges I'm facing. Maybe the tech will evolve to this point and there's simply been a lack of market demand so far. I'm sure that will turn around very quickly as most of the world starts running into this exact problem- billions of photographs that aren't as accessible or secure as the rest of our libraries. Especially as the last generation of film shooters leaves behind lifetimes of memories for the next generation to sort through.
Gunning Grandpa, circa 1951. Kodachrome, unrestored
Sep 11, 2012
Once again, Stu Maschwitz wrote an excellent blog post about something I'm sure is way more common than the industry would like to admit: production sound has not evolved the way motion-picture imaging has.
He argues that setting up his sound devices is still a mystery, and that fixing things in post-production isn't any easier. He points out that the iPhone is a perfectly capable tool for assisting in sound production- every actor and crew member he knows has one, and its interface can clearly tell you whether you've set things up right or not. I agree with everything he says, and it appears a lot of others in the comments do too.
Digital sound was ahead of digital picture by more than a decade in major motion pictures (by my count, Dick Tracy (1990) was the first film to have "a completely digital sound track," whereas Star Wars: Episode II - Attack Of The Clones (2002) was the first major motion picture shot digitally while trying to emulate regular film). And digital sound was adopted into theatres very quickly because, compared to picture, sound files are small, and quality past a simple threshold is difficult to discern (i.e., how many people can tell the difference between a 2MB compressed MP3 and a 20MB uncompressed WAV file?).
Jump forward a decade, when fewer movies are captured on film in exchange for the newest flavor of digital cameras. DSLR's capable of capturing 1080p HD video have entered the hands of the public by the millions; yes, they're very flawed, but compare the quality coming out of them with the quality coming out of the typical $2,000 camera ten years ago, and there's no contest. Sound may have had its shit together earlier, and was always as good as how you treated it, but I'm surprised it's still essentially the same.
When I was in school, High Dynamic Range images were getting really popular and prevalent, even though I thought they looked like mud. But because microphones are traditionally mono, and you can cheaply buy a cable splitter, I wondered if it would make sense to set one input to, say, -3dB and the other to -12dB and combine them to make HDR audio. That way a person's vocals wouldn't peak if they shouted, and we wouldn't lose them if they spoke softly. I still don't know if this is practical, nor have I heard of any system capable of working with files like this.
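For what it's worth, the merge itself is simple to sketch. Assuming two sample-aligned recordings of the same mic as NumPy float arrays (this is my speculation, not any existing product), you'd keep the hot track everywhere it's clean and swap in the boosted safe track wherever the hot one clips:

```python
import numpy as np

def merge_dual_gain(hot, safe, gain_db=9.0, ceiling=0.99):
    """Combine a -3dB ("hot") and a -12dB ("safe") recording of one mic.

    Wherever the hot track hits the ceiling, substitute the safe track
    boosted by the 9dB gain difference; elsewhere keep the hot track.
    """
    boost = 10 ** (gain_db / 20.0)        # 9dB difference ~= 2.8x amplitude
    clipped = np.abs(hot) >= ceiling      # samples at or above the ceiling
    return np.where(clipped, safe * boost, hot)
```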
Stu mentions a few things he'd like to see, like a visual display of whether the mic's been placed correctly. I've heard of "sweet spots" in rooms, where the shape and material of the room and the placement of speakers render a single spot of perfect sound reproduction. JBL's MS-2 seems to do something like this to figure out how to best set up its playback. And then there are noise-canceling headphones that play back sound waves to counter-balance what they hear outside. Can live mics do this? When we record "room tone," can the mics simply counter-balance it, so they only pick up our character's dialog? Or can we do this in post? Has Bluetooth evolved enough to be a viable mic? Can lav transmitters record locally and overwrite the distortions when we're dealing with frequency static? When I have two interview subjects, mic'ed separately, can I tell one mic to cancel out the other so I don't get a soft echo from each person? The list goes on and on.
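The room-tone idea, at least, is roughly what spectral noise reduction already does in post. A crude sketch of the concept in Python/NumPy, assuming mono float arrays (real tools are far more careful about musical-noise artifacts than this):

```python
import numpy as np

def subtract_room_tone(dialog, room_tone, frame=2048):
    """Estimate the room's noise spectrum from a tone-only recording,
    then subtract it from the dialog frame by frame."""
    noise_mag = np.abs(np.fft.rfft(room_tone[:frame]))
    out = np.zeros_like(dialog)
    for i in range(0, len(dialog) - frame + 1, frame):
        spec = np.fft.rfft(dialog[i:i + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # floor at zero
        out[i:i + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
    return out
```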
And maybe these tools do exist- but then why am I not seeing them at my pro video retailer? Stu's got a point. Sound doesn't seem to have been on the innovation highway imaging's been on for the past decade. Or at least it certainly isn't any easier to understand.
Sep 6, 2012
For a long while I've been meaning to write a longer post about my current video pipeline, mostly to compare notes with others in the same situation. In particular I wanted to start with why I've been transitioning away from Apple's Final Cut Pro and back to Adobe Premiere (where I first learned editing in high school), but a new program Adobe's launching called Adobe Anywhere has made me particularly excited, as it directly addresses the pipeline I've had to work with for the past two years.
Quick history: I learned video editing on Adobe Premiere in high school because I had a PC. Although it could edit well enough, the video card was terrible. So when I finished a project and had to "print to tape," I'd be laying on hands and praying that the card wouldn't drop frames on playback and screw up the whole export, forcing me to start over. Before affordable DVD burning (and definitely before YouTube), working linearly was miserable*.
*(For those who care [and none of you should], my father had a Hi8 tape deck at his workplace that was literally like reel-to-reel editing, which I practiced a bit).
I took an Adobe After Effects class in community college a few years later, and loved the Socratic feeling that the more I learned about the program's capabilities, the more I realized I was still just scratching the surface. I still think I have only a working knowledge, especially when I see the source project files of some animations.
Later in college, we were trained on AVID, but because the Nitris and Symphony systems were so expensive, we had to reserve time slots, while Final Cut Pro was available on all of the regular computers. When I graduated and started working professionally, FCP was the choice because it was affordable, available, and what everyone else I worked with used. Skip ahead several years, jobs, and companies, and my typical pipeline became working with a team of two to seven photographers and editors, everyone on FCP, all exporting files for streaming (with the rare occasion of exporting to DVD). I don't think I've ever exported to tape outside of college, though I always entertained the thought when figuring out backup/archive solutions.
Now, if the job wasn't an individually contracted project (in which case I was the sole photographer/editor/motion graphics designer), the work was done in-house, i.e. all editors sat next to each other in the same building. When my partner and I started Looking Glass Children's Videos, we a) didn't have the money to purchase editing equipment for staff, and b) weren't even sure we'd ever want a central post studio. All of the editors I knew and hired had their own equipment they were happy to work with and wanted to edit from home, and with the nature of Looking Glass' videos there was no need to produce from one location (I designed our pipeline specifically so photographers and editors could work from around the country, in case we wanted special videos we couldn't shoot in southern California).
Again, at the time, all of our editors used Final Cut Pro, so our pipeline was for me to source all footage (shot on DSLR's) to the editors via hard drives (or FTP if there wasn't much footage). When they completed a cut, they'd email me the FCP project file (only a few MB's), which I'd reconnect to the source ProRes files and apply final edits and color correction to before exporting a stream for our subscribers' iOS devices.
Unfortunately, a big hiccup in the process was bringing FCP projects into After Effects for cleanup (mostly frame smoothing/warp stabilizing, but some basic effects were applied too). I used Popcorn Island's free script to bring FCP timelines into AE without having to export anything- that way I wasn't creating duplicate files (saving hard drive space and complication) and could keep working with the source video files. It was especially difficult if I worked on the piece in AE and wanted to bring the cut back to FCP- that was impossible without manually re-editing the piece.
Last year, Apple released the long, long-awaited FCPX, which promised a lot of updates editors had been asking for. The program was not only re-coded to utilize the full power of modern Macs, but completely rewrote the fundamentals of how professional editors work. This was a very big deal to very few people, and of course adoption of FCPX by a wider audience was giant, so there's no looking back for Apple. I purchased and used it during the Looking Glass days, but it was clear this new version would be even worse for sharing projects among multiple editors and editing collaboratively, so I got a refund and continued with FCP7. A lot of pro editors started turning back to Adobe Premiere, and over this last year I've been doing the same.
One of the biggest reasons for this is that I still love After Effects and Photoshop more than ever. I've used and liked some of Adobe's other programs, like Lightroom for photos and Audition for sound editing, but having to give up AE and PS would be a sad day for me. All of the major video editing programs are essentially the same in terms of workflow; it's just the details that differentiate them. Meaning it's not difficult to do some basic editing in Premiere (just as it wasn't difficult to learn and use AVID in school). So this was never a light-your-baby-on-fire debate: just as it takes two bottles of beer to enjoy the taste, it takes only a few weeks to hit your stride with an editor.
A few things I'm still getting my head around, which I may or may not come around on just through familiarity:
- I still convert all DSLR footage to ProRes (there's a conversion sketch after this list). Adobe Prelude is an interesting program, but it still feels a little awkward to me. I've heard from a few sources that it's best to convert the limited h.264 codec to something more robust, like ProRes or Cineon, and along with the naming convention I like to use, I don't mind converting. I've also dabbled with the AVCHD codec, so making everything consistent with ProRes doesn't feel like a bad idea.
- Color timing options are overwhelming. There are the filter options on the timeline, like the Three-Way Color Corrector (which doesn't feel as powerful as FCP's); then there's Red Giant's Colorista, whose settings are difficult to carry into After Effects (even though Colorista runs in both); and in After Effects there's Color Finesse, which doesn't translate back to Premiere.
- Premiere's Title window is balls. I can understand making something comprehensive, and I appreciate being able to do so much right in the timeline (instead of going to AE just for basic titles), but it really feels like trying to draw with your left hand in terms of placement, layering, fonts, sizes, etc. Sometimes I really miss FCP's bare-bones titler for throwing in placeholder titles, or for making single-word changes (without having to duplicate and save a separate asset).
- Sound editing also has a learning curve, especially if you want to keep everything in the timeline, and especially with DSLR shooting. DSLR's allow two mic inputs on separate channels, but it's really difficult to mix those back to stereo, or to step backward through timeline-based effects.
- I love that most AE filters work in Premiere- I can throw Red Giant's Denoiser and Unsharp Mask onto DSLR footage in Premiere and it still works when the project moves to AE. But I miss having a smoothcam filter to drop directly on the timeline (just to smooth pans and dollies), and it'd be nice to have something like AE's wonderfully powerful Warp Stabilizer available (even in a simplified form).
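On the first point above, the conversion itself is easy to script. A minimal sketch in Python driving ffmpeg (this assumes ffmpeg is installed and on the PATH; the card folder name is hypothetical):

```python
import subprocess
from pathlib import Path

# Transcode every H.264 clip from a DSLR card to ProRes 422,
# copying the audio stream untouched.
for clip in Path("card_01").glob("*.MOV"):
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "2",  # profile 2 = ProRes 422
        "-c:a", "copy",
        str(clip.with_name(clip.stem + "_prores.mov")),
    ], check=True)
```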
Again, a lot of it just comes with getting used to the territory, but ultimately it's a step in the right direction. It's also really great if you have a large group of remote editors, because licensing just one copy of Premiere is much, much more affordable than purchasing a license for the whole Final Cut suite. And with Adobe's new Adobe Anywhere, a lot of the problems and slowdowns I ran into before are gone.
Now I just need to start exploring the world of Ray Tracing in AE, and practice more with Mocha.