With the same subject title, I created multiple outcomes via manual drawing and DALL-E-2 with different style hints.
digital art
natural illustration
oil painting by Rembrandt
manual painting with Fresco (took about an hour)
Recent language-based image generators can output amazing results given the right kind of input prompts (e.g., book and market), which can be tricky to produce and rely on experience. This led me to predict the emergence of a cottage industry of “image prompt optimization”, analogous to search engine optimization, as well as research on machine learning modules (e.g., another network) that learn to translate ordinary natural language inputs into generator prompts that lead to the desired outcomes.
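As a toy sketch of the latter idea, the Python snippet below rewrites a plain description into a style-augmented prompt. A learned module (e.g., another network trained on prompt/outcome pairs) would replace the hand-written STYLE_HINTS table, which is purely an illustrative assumption.

```python
# Toy sketch of "image prompt optimization": translate an ordinary
# description into a generator-friendly prompt with explicit style hints.
# In the envisioned research, a learned model would replace the
# hand-written table below; everything here is illustrative.

STYLE_HINTS = {
    "digital art": "digital art, vibrant colors, sharp details",
    "natural illustration": "natural illustration, soft lighting",
    "oil painting": "oil painting in the style of Rembrandt, chiaroscuro",
}

def optimize_prompt(description: str, style: str) -> str:
    """Rewrite a plain description into a style-augmented generator prompt."""
    hints = STYLE_HINTS.get(style, style)  # fall back to the raw style text
    return f"{description}, {hints}, highly detailed, coherent composition"

print(optimize_prompt("two white chairs inside a forest", "oil painting"))
# two white chairs inside a forest, oil painting in the style of Rembrandt,
# chiaroscuro, highly detailed, coherent composition
```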
As a pseudo-miracle, I managed to wake up before 5 am for 3 straight days for the virtual SIGGRAPH Asia 2022 PC meetings (which ran past 10 am), and set a personal record of reviewing 5 papers as an extra PC member. Together with my original 15 assignments, that totals 20 papers, still fewer than the roughly 22 I reviewed a few years back (though that earlier batch had fewer extra PC reviews, which tend to be more time-urgent).
Aaron Hertzmann has a recent post about expectation creep that is worth a look for future PC members.
In Concept Art: Break Artist’s Block with Emergent Design – L410, concept artist Sam Nielson talked about common causes of and remedies for creative blocks, which I found equally applicable to scientific research (or any other creative field):
Causes:
Antidotes:
I followed the suggested process (using chunky oil brushes for iterative ideation) to recreate a visual impression I saw earlier: two white chairs in the woods.
https://youtu.be/bXAl42AY4zA
https://twitter.com/liyiwei/status/1525280830687326209
The traditional way to render a realistic image is from a physics perspective: provide a complete enough description of the scene and simulate light propagation and image formation with sufficient accuracy.
The end goal of physical realism can facilitate the understanding of the physical universe, at the expense of costly computation.
Thus, computer graphics research has been mainly about how to hack this entire process to achieve maximum realism with minimum effort.
A key ingredient for this hack is the limits of human perception: both sensory and neural pathways cap the amount of information that can be perceived, beyond which further enhancements are wasted effort.
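As a toy illustration of this perceptual cap, the sketch below keeps refining a Monte Carlo pixel estimate only while the change is still perceptible; the just-noticeable-difference threshold and the noisy_sample() stand-in are illustrative assumptions, not a real renderer.

```python
import numpy as np

# Toy illustration of the "rendering hack": refine a pixel estimate only
# while the improvement is still perceptible. The JND threshold and the
# noisy_sample() stand-in are assumptions for illustration only.
JND = 1.0 / 255.0  # assumed just-noticeable difference in normalized intensity

def noisy_sample(rng, true_value=0.5, noise=0.2):
    """Stand-in for tracing one light path through the scene."""
    return true_value + rng.normal(0.0, noise)

def render_pixel(rng, max_samples=4096):
    total, estimate = 0.0, 0.0
    for n in range(1, max_samples + 1):
        total += noisy_sample(rng)
        new_estimate = total / n
        # stop once additional samples no longer change the pixel perceptibly
        if n > 8 and abs(new_estimate - estimate) < JND:
            return new_estimate, n
        estimate = new_estimate
    return estimate, max_samples

rng = np.random.default_rng(0)
value, samples_used = render_pixel(rng)
print(f"converged to {value:.3f} after {samples_used} samples")
```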
This “rendering hack” long predates computer graphics (or machine computation): visual art is traditionally based on manual effort, which makes brute-force computation impractical.
The end goal of the artistic ideal can facilitate the understanding of perceptual mechanisms, which in turn provide opportunities for subjective variation (such as different artistic styles and movements).
This is one of the main reasons I have been practicing drawing, sketching, painting, animation, design, and other creative forms. It is more of an exercise to see than to draw. (Besides, it is fun and diversifies my daily routines.)
“If you can’t explain it simply, you don’t understand it well enough.” – A quote attributed to Albert Einstein.
PS
Recent trends in black-box machine learning move further in the opposite direction, making visual computation even more expensive and opaque than traditional forward rendering in computer graphics.
I look forward to seeing whether machine learning can shed light on the artistic and subjective aspects of image formation.
I saw 11 replies to a Slack thread of a research project and thought “whoa, what a lively academic research discussion”, and then came in to find out it was all about the gift card amount for user study participants.
=D
For a recent research talk to about 100 people, the host suggested that I pre-upload any video files into the conferencing system to minimize network disruptions during the live presentation.
Since my slides contain many videos, a mix of online and local files, it would have been infeasible to pre-upload them all and then scramble to pick the right one on the fly.
Thus, I pre-recorded my entire presentation during a practice session, uploaded the recording, and played it during the meeting with my camera on while lip-syncing.
Panopto capture seems to work well for presenting from a Chrome tab (with Google Slides).
CHI 2022 provides this Medium post about giving a remote presentation, with different styles ranging from very simple to quite fancy:
Feel free to pick a style you like.
I would do voice-over only, as I try not to let people know what I look like, or even what I sound like (hence the synthetic voice-over).
But if you would like people to know you and/or want the talk video to better approximate an in-person conference presentation, consider showing your face.