The scientific mindset jumps straight to the implementation of HDDV-camera technology breakthroughs ... corroborating and proliferating rapid, low-cost, high-quality channel throughput.
The art of screenwriting -- feature movies, Cinerama theatrics, home theater, television, sitcoms, dramas, movies of the week, blockbusters, super-blockbusters, festival shorts, now new interactive-DVD games, and computer-based live-action-staged virtual-reality multi-player team-games (cf. Holodeck) -- its art, technology, science, form ... rapidly clarifying and developing as faster digital computers and Internet and higher-definition cameras and projectors bring collaboration instantaneously to bear on all phases, from public interest to idea to concept to research to treatment to plot to story to script to review to pitch, agents, options, sales, to producer to market analysis to budget to investors to coproduction to storyboard to production-line script to schedule to directors, casts, crews, to logistics, locations, settings, sets, properties, to cameras, units, blocking, shooting, to dailies (now "seconds") to editing (and "mixing"), CGI computer-generated imagery, sound, music, rights, synchronization, to mastering to copies to promotions, trailers, to distribution to theaters, syndicates, broadcasters, to previews to first runs to commercial debuts to investor returns, public opinion, world-wide release, director versions, DVD rentals, sales ... is full-cycle and a web-ful of information:
Discussion of the new generation all-digital movie technology:
(HDD-MAX sometimes designated Super-HDDV: full-eye-width 180°x135° large screen format; 16000×12000 = 192Mpx)
Celluloid film has random grain, which better approximates the visual response (finer-resolution pixels at slower rates), while HDDV, being hardware, has fixed pixel placements, even at the same overall density! The result is that digital HDDV lacks the inter-pixel estimation needed at the Nyquist frequency: i.e. at maximum frequency the sine wave looks like a square wave, but its cosine looks washed-out (literally a gray wash-out: pixel-to-pixel-averaged bland gray).
NB. Inter-pixel estimation is still one pixel wide, but reducing both sine and cosine to equal-contrast sesquipixels, each half-washed, halves the "wow" blur-oscillation at slow image motion -- eliminating pixel-jerky 'dove-headed' moon-jogging (where the otherwise slow-moving moon jerks very noticeably)...
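The wash-out is easy to see numerically: sampling a full-amplitude sinusoid at exactly two samples per cycle yields either full-contrast alternation or uniform gray, depending only on phase. A minimal Python sketch (the function name is mine):

```python
import math

def sample(phase_deg, n=8):
    """Sample a max-frequency sinusoid at the Nyquist rate:
    sample k lands at k*pi radians plus the given phase."""
    ph = math.radians(phase_deg)
    return [round(math.sin(k * math.pi + ph), 6) for k in range(n)]

# Peak-aligned phase (90 deg): alternating +-1, the "square wave" look.
# Zero-crossing-aligned phase (0 deg): every sample is 0 -- bland gray wash-out.
```

`sample(90)` alternates ±1; `sample(0)` is all zeros -- the quadrature component the fixed pixel grid simply cannot carry.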
CAMERA RULE #1. PERPETUAL MOTION:
Just slightly: it needs 0.5 pixel/frame of shuttered motion to take up the inter-pixel estimation, and that's (0.5 pixel/frame / 1080 hor. lines) × (30 frames/sec) = 1 frame-height per 72 sec, virtually unnoticeable -- or (0.5 pixel/frame / 1920 vert. columns) × (30/sec) = 1 frame-width per 128 sec, also virtually imperceptible. And it can always adjust the direction back down;--
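The take-up arithmetic in the rule checks out directly (Python sketch; `crawl_time` is my name for it):

```python
def crawl_time(pixels_per_axis, px_per_frame=0.5, fps=30):
    """Seconds for a constant 0.5-pixel-per-frame glide to traverse
    one full frame dimension."""
    return pixels_per_axis / (px_per_frame * fps)

print(crawl_time(1080))  # 72.0 s per frame-height
print(crawl_time(1920))  # 128.0 s per frame-width
```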
CAMERA RULE #2. ON THE DIAGONAL SLOPE:
Perpetual motion along either HDDV diagonal, slope ±29.4°, is about the best strategy for broadening utilizable inter-pixel-estimation interleaving over various glide rates ... Better would be 24.3° for square pixels, as that allows the horizontal the most speed variability, 0.45 to 1.6, while maintaining the inter-pixel distance at least 0.5 of horizontal or vertical -- but that's fine tuning on a coarse range -- and as speed increases, pixel fill is less noticed;--
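For the record, the two angles quoted agree with the 16:9 geometry: ±29.4° is just the frame-diagonal slope atan(1080/1920), and tan 24.3° ≈ 0.45, the lower end of the quoted speed range (a quick Python check; the variable names are mine):

```python
import math

# The HDDV frame diagonal: atan(1080/1920) ~ 29.36 deg.
frame_diag = math.degrees(math.atan2(1080, 1920))

# Slope of the 24.3-deg glide: tan(24.3 deg) ~ 0.45, the quoted lower speed bound.
slope = math.tan(math.radians(24.3))
```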
CAMERA RULE #3. FIGURE INFINITY [LEFT-SIDEWAYS-8]:
The overall optimal busy perpetual-motion strategy is the in-phase second Lissajous: the image is thus always scanning horizontally or vertically, nearer level, while filling in the interstitial pixels continually ... (descending shallow might be preferred, like reading);--
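The in-phase 1:2 Lissajous is the sideways figure-eight: x sweeps at the base rate while y oscillates at double rate, so the glide stays near-level while walking through the interstitial pixels. A minimal parametric sketch (the amplitudes are illustrative):

```python
import math

def lissajous_eight(n=8, ax=1.0, ay=0.25):
    """Sample one period of the 1:2 in-phase Lissajous figure-eight:
    x at the base rate, y at double rate (a shallow, mostly-level glide)."""
    pts = []
    for k in range(n):
        t = 2 * math.pi * k / n
        pts.append((round(ax * math.sin(t), 6), round(ay * math.sin(2 * t), 6)))
    return pts
```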
CAMERA RULE #3.5/#4 ... OR FOLLOW A SUBJECT LINE:
Off-horizontal, a crossing street, a crossing roof line, a crossing shirt pattern, a gliding bird, ...
Off-vertical, a tapering column (not by parallax), a leaning ladder, a palm tree ...
RELATED FUTURE ITEMS:
The Bayer-pattern RGBG pickup gives double resolution for green -- which is also the primary luminance -- and gets at first-order higher resolution, in motion, by the green pixels being small: interspersed with the red and blue pixels.
Likewise a Bayer pattern RGBG display effects the higher resolution, for motion.
Technologically, inter-pixel wash-out may be reducible in white by using the 2-D-adjacent colors, for better display resolution.
"Moonwalking WOW" can be computationally reduced by pre-loading adjacent pixels to about 60% (requiring a 2× higher "nyquist frequency-resolution" camera; See Sesquipixels), but which thereby reduces maximum resolution similarly (where that is requisite). But, Displayed resolution can be re-improved on the receiving end by line-doubling computing inter-line sub-pixels (I've suggested treble-thread gemming) allowing larger screens without jagged stairstepping:--
Total detail-of-interest resolution is fairly met by 1920×1080, but visual acuity (sharpness) is highly desirable.
[less resolution at busier color, etc. ... visual bandwidth]
This saves takes, retakes, edits, actor mindchop, Director brainstorms ... (giving it all to the Editor, in a soundproof room).
And by then the distance covered requires a new floor or track set-up anyway....
As the camera can shoot wide-angle, other cameras must be placed either behind visual blinds, corners, obstructions, or at angles greater than about 30 degrees (a 1920×1080 camera takes in 32° at 1 arcmin/pixel, or about half that, 16°, at sharpest ocular resolution, front-row seating). Be watchful of mirror reflections in windows, eyeglasses, computer screens, dress medals, even shiny white dry-erase boards, wristwatches ... more often trouble for the lighting.
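The 32° figure follows from 1920 pixels at one arcminute each; at the sharper ~0.5-arcmin foveal limit the same frame spans about 16° (Python check; the function name is mine):

```python
def fov_degrees(pixels, arcmin_per_px=1.0):
    """Field of view when each pixel subtends the given visual angle."""
    return pixels * arcmin_per_px / 60.0

print(fov_degrees(1920))       # 32.0 deg at 1 arcmin/pixel
print(fov_degrees(1920, 0.5))  # 16.0 deg at sharpest foveal acuity
```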
CAMERA RULE #5. ODD NUMBER OF CAMERAS IN A CIRCLE:
Generally, cameras circling an object can better avoid sighting each other if they sit in odd sectors (the wide angle between any two remains constant for the third moving on the circumference) -- an odd number of cameras. E.g. an isosceles-triangular array of cameras might capture a lecturer at a podium head-turning left and right, and equally the audience, without wasting actor time and set-up -- up to 60° of freedom each if equilateral (about twice the individual HDDV camera view). And in a camera emergency the odd camera is a hot spare, allowing the tightest shooting schedule to continue.
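The 60° figure is the inscribed-angle theorem at work: a camera on the circle sees two other equally spaced cameras under half their central arc. A sketch (function names are mine; the equal spacing is the rule's own assumption):

```python
def inscribed_angle(central_arc_deg):
    """Inscribed-angle theorem: an arc of a circle subtends, at any point
    on the remaining circumference, half its central angle."""
    return central_arc_deg / 2.0

def neighbour_sight_angle(n_cameras):
    """Angle under which one of n equally spaced cameras sees two
    adjacent cameras elsewhere on the circle."""
    return inscribed_angle(360.0 / n_cameras)

print(neighbour_sight_angle(3))  # 60.0 deg for the equilateral triple
```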
More specifically, a multitude of cameras (and therefore odd numbers are encouraged, but no object), each behind a screen such as a dark-transparent facet on a panel or piece of equipment, can capture numerous POV angles: as-if additional imaginary actors up close (participant voyeurs), or as-if important things also looked at, or as-if more active head-turning in watching, or even as-if multitudes in audience ... the alter-ego effect, played on the audience (even as viewed by a single audient), all watching the Agonists together ... and can be alternated freely from the edit console, for a very live feel.
Near-future display technologies include:
1. HDTV-compatible sexichrome, 6-color = 2 eyes × 3 colors/eye: it'll mean HDTVs made with either RGBOTM (red, green, blue, orange, turquoise, magenta) or polarized RGBrgb (red, green, blue; 90°-polarized red, green, blue)... Viewers will wear clear-white safety goggles*, 3 colors filtered out per eye (letting the other 3 colors, and natural-blend room colors, in). In the "off" mode, the 6 colors are more, higher, flat-only resolution, utilizable for line-doubling, etc.
* ('Safety'--so that viewers don't poke an eye, brushing away 3-D whatevers seeming near them showing too-CU.)
* (4-color RGBT [red, green, blue, turquoise] or RGBY [red, green, blue, yellow], as on some contemporary higher-resolution cameras, would suffice for a small-range 3D, because red and blue are lower spatial density -- but without CU.)
2. Flood-focused, utilizing directional pixels, micro-Fresnel-parabolic-flex-mirrors, continuously adjustable, to micro-steer the light. The control signal is fairly mundane, as it is basically a parabolic tilt: a simple voltage ladder supplies each micro-mirror a slightly different tilt coefficient, while bias steers the whole and the end-to-end value is the total depth of focus... the signal source is Raster-Contour-Scan, like TV today but with the depth-of-focus added signal on each object, which the receiver recomputes to check (mask-out) image overlap... Signal-bandwidth is small, giving preference to the nearer image, as edges of the farther image are seen by one-eye first (most adjacently) then by both at some image distance from the nearer: that distance allows for low bandwidth...
(I scripted all my screenplays for digital 3D large format, since beginning ca 1996.)
2010Q4 UPDATE:
The recent advent of the glut of 3D-HDD-MAX-movies (we need more 3D-HDD-MAX-theaters), calls up a few more improvements needed:
CMOS HD camera technology now matches and surpasses CCD with palm-size 2056x1544 30 fps (var.) and small-package 2532x1728 240 fps (max.). Cost-effectiveness opens the door for Independent HDDV feature production: Keep your DVD for shooting locations and auditions, and skip to large theater HD-DVD format now for--
CMOS cameras come (variously) with compression software (e.g. SPIHT wavelet future-compatible with higher resolution; truncatable, scalable: up the data rate.)
Cameras are available B&W or RGBG Bayer-pattern color, like contemporary CMOS still cameras. (RGBG total resolution is half in high-sensitivity green, quarter in red and blue, but 4:3 aspect holds 33% more pixels than 16:9.) Lens mount is typically C or CS (a CS-mount camera can take an adapter for a C-mount lens; a C-mount camera can't do CS).
Programmable cameras fit in your hand and look like a lens unit attached to a small black box and a plug -- programmed by loading cropping parameters into its registers; faster 500-4000 fps at lower resolution. The CMOS electronics contains its coordinate circuitry on the same one chip, and runs on one power voltage (5V ±10%, 2.5W).
Cable is CameraLink™ or RS-644 serial or USB 2.0 or IEEE 1394 "FireWire". (USB 2.0 can handle HDDV 1920×1080 at 24 fps.)
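The USB 2.0 claim is plausible for single-sensor Bayer output at 8 bits/pixel (that assumption is mine; note real USB 2.0 payload throughput runs below the 480 Mbit/s signalling rate, so the headroom is thin):

```python
def stream_mbits(width, height, fps, bits_per_px=8):
    """Raw uncompressed data rate in Mbit/s (8 bits/px assumes
    single-sensor Bayer output before demosaicing)."""
    return width * height * fps * bits_per_px / 1e6

rate = stream_mbits(1920, 1080, 24)  # ~398 Mbit/s
USB2_SIGNALLING_MBITS = 480          # high-speed USB 2.0 line rate
```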
Future models, we expect, will use: 1) enhanced controllable zoom lenses; anamorphic lenses (to get that 33% top-and-bottom cropped into the 16:9 format, from 4:3 camera chips); quick-release chromatic lens filters from your gadget toolcase; 2) built-in on-chip SPIHT compression to lower your cable data rate.
CAMERAS FOR CINEMA, CINERAMA, HDD-MAX, 3DDV, HDDV-HDTV: (1Mpx and up; 1920×1080 ~ 2.1Mpx)
(NB. nondefinitive sampling-as-found ca 1996-2007, cameras, features, formats, models, variants, upgrades; no attempt to compare them; estimated resolution is column-summarized as-for 16:9 aspect, as half-total for RGBG, RGB... [this documentation is REDUCED to technologies/types, 2020])
(3-CCD; 3-CMOS, 1996-2007)
(3-CCD, 1996-2007)
(3-CCD, 1996-2007)
(3-CCD FT, some CMOS; multiformat 2/3-in. 16:9/4:3-letterbox 1920×1080×4sub, 1996-2007)
(var. 3-CCD/2-CCD/3-CMOS, 1996-2007)
(3 CCD, 1996-2007)
(typ. 3-CCD 1/3-in. native 16x9, 1996-2007)
(4-CCD 2/3-in. RGGB quad-stream HD-SDI or fiber 6G/s, 1996-2007)
(film style digital, 1996-2007)
(Cinema 4K camera, Large-Format, 1996-2007)
(Digital Cinema, 1996-2007)
(CCD, 1996-2007)
(CMOS 2056×1544 [1560] ~ 4:3; 1/2-in.; programmable framerate coverage, 1996-2007)
(2.3 µsec./row; 100×100 4000 fps; and faster, 1996-2007)
(QuadHD 16:9 3840×2160p/i =4×1920×1080 12-bit 30 fps i60; 4× SMPTE 292, XVGA, DVI, 1996-2007)
(3-CCD, 1996-2007)
(new technology, 1996-2007)
(1996-2007)
(2-CCD: B&W + C, 1996-2007)
(direct image sensor: 3-layer, vertically stacked, 1996-2007)
(and more advanced technology announcements since, 1996-2007)
Addressed here are video-in-video subscene formatting, and the inner-voice feeling:
1) The staples ("brackets") meld a remote scene: e.g. a video phone where the actors are live, through a remote camera, into the present camera: not OS-VO (might call it VID-Over) -- their image is angled by that video, and the video display, e.g. a wrist watch, can move -- or e.g. the opening scene where the windows and displays portray a [gaggle of giggling women] chased by the starship, not interactively with the crew, but cued by their specific actions and dialog as the crew respond and interact -- it's a major, nontrivial, inserted background: technically speaking, that vidlink is itself the nouveau actor on-stage.
As a mathematician, engineer, technologist, I think of it as a proven, a device, blackbox, a new toy I can now share with my audience, and we can (screen)play together ....
I worked around to this, then decided to standardize it in my works (my article on vidlink technology implicates just about every story, script, screenplay I've done since 1984 as having interactive remote conferencing of various time-delays, cuing, asymmetries: part of the testing phase of that concept's viability) -- and bracketing seemed the best way to indicate that the actor is already framed, lit. bracketed (similar meaning), contained -- cf. the written-publication industry, where brackets mean the editor has directly inserted (melded) an additional word or phrase to explain what's going on in the main article.
I also restrict the already standard MATTE, to the special case where the subscene is TV or cartoon, because live-video is now inexpensive to insert and interact. Tele-Phane/Vid-Link is a standard near-future prop.
2) Another invention I've added is the IV: the inner voice is a particular kind of VO which feels like "I'm thinking" -- not the suggested thoughts of a narrator, nor a radio playing -- the IV relies on contra-phase stereo audio to give the inner, nebulous sound of presence, not any directional "here". Thinking is not a hollow echo (though I've imagined the joke that Hollywood thoughts are developed in garbage cans).
I hope this note clarifies what will be your technical format standard, too.
PS#1: V-ID -- video-identity, video-in-display -- is cute.
PS#2: Or VIV -- video-in-video: like PIP picture-in-picture.
PS#3: Or SOS -- stage-on-stage, screen-on-screen, script-on-script.
PS#4: Or SIS - ibid (sister, sounds better).
What's a good price for a top-quality screenplay, that got rejected because--
$100K?
I spend 9 months on the average with a good sci³-fi screenplay -- and update it continually for years after: tweaking, adding joke-lines, new science references -- I could live on $100K/year (presuming 3 month vacation) -- and given more frequent sales, might consider this viable.
We haven't seen this happen yet, because HDTV sets have only appeared on the store shelves (floors) these past few months: the first were difficult to view from the sides (rear-projection systems have narrow angles of best view), the interlace lines exhibit a vertical-transience artifact, and the better flat-wall plasma displays are now entering at $6K-$10K.
But, the door is within reach: Color TV swept the country in about 2 decades ca 1951-1970. Technology moves faster these days (both selling faster, and, rapidly improving) and electronics prices are still dropping: Equipment production is in-place, and more flexible to reprogram for newer designs -- and those surplus bid-house auctions drive lots of interest.
The movie industry is the original exponential regime: with the producers at the intrinsic instigating infundibulum -- that's where the big money is made (seconded by theatric distributions considered pro cumula).
The significant competitions, were, other producers, and of course audience choice.
Now with the increase in competent screenwriters, and the horde of amateurs, and the applications of screenwriter aids on the now-popular PC (and Macs), the effectual sales are going to be greater, but also more distributed .... That doesn't lessen the curricular skills of screenwriters: In fact, I've been suggesting by my own work, the next major advance in selling movies to the audience, is, authenticity -- i.e. real substance.
Consequently my ARCHAEODUS [First Journey] Adam in Eden, based on newly established Scriptural exegeses followed-up with satellite photography proof, and my Professors' Spring Break (trilogy) unraveling mysteries and theories galore. And my subsequent Comeback Mouse In Uproar Bit expanding on one new science theory at a time (setting this as the most productive and proliferative Hollywood standard) and my all-out comedy The Great Space Race, which at least intends good realistic photography of even the backside of the moon, ... are the first of the new-technology crop.
If we compare the mechanical cost of making movies, we see this also decreasing -- 35mm eqv. HDDV cameras at $100K instead of ArriCam Studio at $200K (plus lighting assistance the more dynamic CCDs don't need), video tape at 2% the cost of celluloid film, instant proofs (minutes not dailies), re-shooting, multi-angle shooting, re-editing, re-mixing, specialized markets, increased environmental locations, increased cgi replacing everything (even the actors at the high end), shortened production schedules, ... the cost of screenwriting must come down to meet the market demand ... back-to-school comes to Hollywood, with lots and lots of homework: turning-out scripts will approach the pulp-industry (except no longer wood-pulp, but would-pulp) -- be grateful to keep 10% of the old Hollywood income (and cars aren't any cheaper, unless you buy the next generation digitalismo: Revolution isn't half what a car should be, but the market will grow).
[NB: The rental cost of e-HDDV has been about $1500/day cf film about $800/day -- that should reverse, but it's already in favor of the e-HDDV with its shortened shooting and production schedules]
While it is common practice to match the camera and the receiver, it is potentially easy (cost and build) and effectual to further resolve the receiver monitor, to remove the artifacts introduced by the information processes not specific to the scene detail ... While the camera records all scene objects, and the compression sends the necessary information, the receiver may pixel-interpolate/interpret: at the visual detail density available in HDDV/HDTV we can consider this the reality-cartoon, and sculpt the display accordingly -- keeping the details, but revising the image to clarify only those details: e.g. a star is a star ... a line, a line; a grade, a grade ... very common among cartoons!
I long ago proposed treble-thread gemming*, by which diagonals would be smoothed and sharpened, to reduce the stair-step artifacting common on large screens: 3×3 subpixels have the central value retained, and gem the edges and corners.
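The exact gemming weights aren't given here, so the following is only a hedged sketch of one plausible reading: each pixel expands to a 3×3 tile whose centre keeps the source value, while edge and corner subpixels split the difference with the orthogonal or diagonal neighbour (the half-and-half averaging is my assumption):

```python
def gem_upscale(img):
    """Hedged 3x3 'gemming' sketch: centre subpixel keeps the source value;
    edge subpixels average with the orthogonal neighbour, corner subpixels
    with the diagonal neighbour (half-and-half weights assumed)."""
    h, w = len(img), len(img[0])

    def px(y, x):  # clamp at the borders
        return img[max(0, min(h - 1, y))][max(0, min(w - 1, x))]

    out = [[0.0] * (3 * w) for _ in range(3 * h)]
    for y in range(h):
        for x in range(w):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    out[3 * y + 1 + dy][3 * x + 1 + dx] = (
                        px(y, x) + px(y + dy, x + dx)) / 2.0
    return out
```

On a flat field this is the identity; across a diagonal step the corner subpixels take intermediate values, rounding the stair-step into the "gem".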
SHARPENING:
Somewhat an AI approach to imaging: most objects show surface continuity -- or at least, digital stair-steps are not approximations to self-similar surfaces! Straight lines (curvilinear) represent more object edges, and are less distracting for representation of a detailed source -- and should look better: maybe a bit cartoonish in the minutiae, but that's easier to watch.
SMOOTHING:
Just a comment (first/here). The common indexed-color image yields a mottled appearance at low color-gamma resolution, but shouldn't (for aesthetics): when a pixel-to-pixel color step is one quantum on the gamma, it should be presumed a crossover or threshold or dither value, and averaged over several pixels in all directions ... any real pointillation should likewise be signalled by multi-quanta leaps from the local trend (polynomial fit), and so rendered sharply!
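A one-dimensional sketch of that rule, under my own choice of window (the one-quantum test and the averaging radius are illustrative assumptions):

```python
def smooth_quantum_steps(row, radius=2):
    """Sketch: where adjacent samples differ by exactly one quantum, treat
    the step as dither and average over a small window; larger leaps are
    kept sharp as real pointillation."""
    out = list(row)
    for i in range(1, len(row)):
        if abs(row[i] - row[i - 1]) == 1:
            lo, hi = max(0, i - radius), min(len(row), i + radius)
            avg = sum(row[lo:hi]) / (hi - lo)
            out[i - 1] = out[i] = avg
    return out
```

A lone one-quantum step gets spread into a gentle ramp; a multi-quantum leap passes through untouched.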
IMPLEMENTATION OF SMOOTHING
An effectual use of the Gaussian Normal Distribution (cf Binomial Distribution) is for overlap of adjacent pixels: Adjacent pixels rapidly approximate a linear intensity as they near-merge---something like 1.4% flutter at 2 sigma (1.414 sigma on a square grid is 2 sigma diagonal). I once noticed an HP color monitor with pixels apparently elliptical (2-D Gaussian Normal Distribution), and the diagonals exhibited almost no digital stair-stepping ... especially consistent smoothness along curving/curling edges.
[But note theoretically the Gaussian Normal is not the perfect solution]
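A numeric check of the 1.4%-at-2-sigma figure above, modeling each pixel as a unit Gaussian (the one-dimensional reduction and the function name are mine):

```python
import math

def flutter(spacing_sigma, n=2000):
    """Relative ripple of a row of unit Gaussians at the given spacing
    (in sigmas): scan the summed profile across one inter-pixel gap."""
    samples = []
    for k in range(n + 1):
        x = spacing_sigma * k / n
        samples.append(sum(math.exp(-((x - m * spacing_sigma) ** 2) / 2.0)
                           for m in range(-6, 7)))
    return (max(samples) - min(samples)) / (max(samples) + min(samples))

print(flutter(2.0))  # ~0.014: the ~1.4% flutter at 2-sigma spacing
```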
SHARPENING TECHNIQUES
One of the techniques for improving detail is the fairly recent (2000) implementation of the Bayer pattern, doubling the resolution at green over red and blue -- lines done in RGBG offset-interleaved (RGRG…/GBGB…/RGRG…/GBGB…/...), thus doubling the green content resolution (which itself contains most of the brightness and detailing), which makes a better match to ocular expectancy than the prior standard RGB. This is currently available in CCD and CMOS photo-imaging sensors, and in cameras built on those; less generally in displays, which have used color-stripe or -triad since the '50s.
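The RGRG…/GBGB… interleave and its density consequences (half green, quarter each red and blue) can be sketched directly (function names are mine):

```python
def bayer(rows, cols):
    """Build the offset-interleaved RGBG mosaic: RGRG... over GBGB..."""
    return [''.join(('RG' if y % 2 == 0 else 'GB')[x % 2] for x in range(cols))
            for y in range(rows)]

def density(mosaic, channel):
    """Fraction of sensor sites devoted to one channel."""
    flat = ''.join(mosaic)
    return flat.count(channel) / len(flat)

m = bayer(4, 4)  # ['RGRG', 'GBGB', 'RGRG', 'GBGB']
```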
In simplest theory, green should be quadrupled resolution (1 bit/dimension) over red, and red quadrupled over blue, but there is no practical pixel configuration maintaining locally uniform density of the major primary green -- unless red and blue were fitted in as corner pixels in the camera; however, digital technology does not lend itself to squeeze-fitting -- the next-best fit having many-color pixels, averaging in favor of the green middle spectrum.
Displays can also take advantage of the 3x-4x density of color pixels: Whitish and pastel color edges can be nudged one color pixel at a time (needing color-balance behind) for triple the whitish-detail density ... but colored edges have less to gain: At best, colors may "white-refract" on edges: As specific color pixels are fewer, a passing edge can illumine the off-colors, balance behind producing white (less distracting than letting colors twist) and favoring the proper color: The result is less saturated, pastel.
[Color-stripe restricts to horizontal (or vertical, not both) with less overall improvement]
Interestingly in this aspect, a colored star will be most realistic, as stars do appear as white points (at the center point of the eye) with color around. A star moving across a display can be given a white center within a wash ring of its proper color ... as regularity and smoothness in motion of a background star is more important to its perception than either its precise color or its smallness, this may be an important factor in display technologies ...
[An unrelated but similar improvement technology is pixel-on-pixel color-layer stacking]
* [Yes, it rhymed with "triple-threat jamming"---(a veritable pun on my former digital electronics employment expertise)]
The ideal match, digitally predictable, is to bit-reverse and bit-interleave the x-y display-coordinate indices (bit-reversal maps 0001, 0010, 0011, 0100 to 1000, 0100, 1100, 0010; bit-interleaving maps spatial [001 001], [001 010] to linear 000011, 000110) ... getting both spatial dimensions transmitted in that combined linear, uniform, random-like process.
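Both mappings are a few lines each; this Python sketch reproduces exactly the worked examples in the text:

```python
def bit_reverse(v, bits):
    """Reverse the low `bits` bits of v: 0001 -> 1000 for bits=4."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (v & 1)
        v >>= 1
    return r

def bit_interleave(x, y, bits):
    """Interleave the bits of x and y into one linear index,
    most-significant first: [001 001] -> 000011."""
    out = 0
    for i in range(bits - 1, -1, -1):
        out = (out << 1) | ((x >> i) & 1)
        out = (out << 1) | ((y >> i) & 1)
    return out
```

Stepping a counter through `bit_reverse` makes each successive sample land maximally far from the previous ones, which is why the definition keeps refining as the indices count higher.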
The result "something like television" (a 1960's sci-fi quip) is not only effectually best-possible random (better than natural random because it never lapses random: never shows adjacent pixels, which natural random does, called, 1/f noise), it shows infinite resolution: as the indices count higher, the bit-reversal definition refines ... ad infinitem.
Imagine watching an image that continually sharpens while you watch, never blurs out at some fixed resolution, though it is as simply digital as the current technology, and though it's not going to happen perfectly, because even natural random photons aren't that sharp in the first place, it'll surely be better than HDTV, no edge-artifacts, no motion artifacts....
The disadvantages are: 1. the scan sweep rate is 500× faster (to get from 0000 to 1000 new, instead of to 0001 old), and random-like, requiring accurate random access instead of simpler sequential access; 2. image self-correlation-coefficient compression is not available -- though Chaos Theory suggests there may be some self-similar compressibility.
[See also: the mechanics of screenwriting; writing a log line]