One can browse to the desired webpage, choose 'View Source', and then save the
source with Notepad.
or
If one enables the disk caching feature in QtWeb, one can browse the desired
webpages and then pull the source out of the cache dir. Here's a crude hack,
using the GNU UNIX utils:
grep -r text\/html QtWebCache\cache\http | gawk '{print $3}' | sed "s:\\:\/:g;s:^:strings :;s:$: >> qtweb\.cache:"
- find the html pages in the cache dir
- change the backslashes to forward slashes
- build a 'strings' command for each file, so strings.exe dumps the non-binary
contents, i.e. the page source, to a file called qtweb.cache (the generated
commands still have to be run, as shown below)
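To actually run those generated commands, one rough way (a sketch only, assuming
a plain cmd.exe session and a made-up batch file name) is to redirect the
pipeline's output to a batch file and call it:
grep -r text\/html QtWebCache\cache\http | gawk '{print $3}' | sed "s:\\:\/:g;s:^:strings :;s:$: >> qtweb\.cache:" > dumpcache.bat
call dumpcache.bat
Each line of dumpcache.bat appends one cached page's source to qtweb.cache.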
But imagine if one could dump source straight to qtweb.cache with QtWeb from the
command line.
QtWeb.exe -dump http://somewebsite.com/somewebpage.html > qtweb.cache
QtWeb's usefulness would IMO become enormous. Harmful javascript could be
easily monitored, and even 'sterilised' to plain html on the fly through simple
text manipulation. Mobile users who require access to javascripted pages could
gain access. Blind users could convert pages to plain text, opening up more
possibilities with text-to-speech technology than the current cumbersome screen
readers offer.
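For instance (a rough sketch only, assuming the proposed -dump switch existed;
the sed patterns are illustrative and would miss tags spanning multiple lines),
sterilising and text conversion could each be a one-liner:
QtWeb.exe -dump http://somewebsite.com/somewebpage.html | sed "s:<script[^>]*>.*</script>::g" > clean.html
QtWeb.exe -dump http://somewebsite.com/somewebpage.html | sed "s:<[^>]*>::g" > page.txt
- the first strips script elements from the dumped source
- the second strips all tags, leaving plain text ready for text-to-speech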
These are just three main ways this would be useful. I can think of many
others.
Consider that, with only a very few difficult, faulty and/or obscure exceptions,
*no current browser* can dump a javascripted page from the command line.
QtWeb would be the first to do it simply.