Updates and Polanoid Shot of the Day winners.

Posted on Aug 18 2020

This is the first post in a very long while!

After migrating this site from Blogger to a self-hosted system, I realised that I have many analogue images that I never posted on Parahanga; they went to Flickr and Polanoid instead. While I have no issues with Flickr and Polanoid, I find it important to have the images here on my own site, away from corporate terms and conditions. One need only look at what happened to PhotoBucket (incidentally, I lost a few of my older digital pictures there).

So over the next few months, updates are coming! They won’t be new photos, but rather photos previously posted to Flickr. In time, brand new photos may be posted here as well.

While winning prizes and awards is not a primary motivator of mine, I thought it would be nice to show the six Polanoid Shots of the Day I’ve won over the last ten years, some arriving several years after I posted them!


Shot of the day for 2009-12-17

Shot of the day for 2010-02-08

Shot of the day for 2014-11-17

Shot of the day for 2017-12-17

Shot of the day for 2018-04-17

Shot of the day for 2020-01-07

Parahanga

Posted on Aug 16 2020

My journey into photography (and life) has gone through many successive waves.

I started learning photography on 35mm. In my mid-twenties, I found my passion in digital photography when it was fresh on the scene. It offered a new paradigm for photography, moving from chemical processes to electronic (digital) ones. It greatly reduced the barrier to entry: photography no longer required a darkroom; all that was required was a digital camera and a computer (and now, just a digital camera). I loved the simplicity, and when I immigrated to New Zealand, I dived into photography (I even toyed with the idea of becoming a professional!). My website here at vonnagy.com even ranked first for ‘new zealand photography’, gathering up to 35,000 visitors a month in the early 2000s.

This was a huge period of growth for me as an artist. During this time, I made friends from all over the world, and I had the opportunity to meet many of them in real life. I travelled to Australia, Canada, Poland, Portugal, and Ireland to meet other photographers. Many of these people, such as my friend Troy Moth, have become close friends.

But after several years, my passion for photography dried up. I was also deeply invested in the startup Online Republic, which was taking up as much as 80 hours or more of my time. In 2007, I took my digital camera on a much-needed holiday to Europe. I remember taking some photos in beautiful Northern Ireland. After that, I didn’t pick up my digital camera for a long time. In fact, those photos from Northern Ireland are still residing on an unviewed flash card somewhere! My WordPress blog also got hacked via a comment plugin I had, and started pumping heaps of spammy links into my sites and others. I eventually fixed it, but the damage was already done: my site went from 35,000 visitors a month to about 35. I just figured my time with photography had run its course. It was nearly two years before I picked up a camera again.

Then, I heard that Polaroid was no longer going to manufacture their film. That caught my attention.

I bought a Polaroid camera in an auction and took a road trip to the central North Island with my girlfriend to pick it up. After the first photograph I took (which was literally a chicken that had crossed a road), I was hooked. I had no preview of how the film would turn out, and the quirky nature of Polaroids gave me unexpected joy in shooting photographs again. I realised that it was the medium of digital photography that had sapped my passion, not photography itself. Film renewed my passion.

With my website in shambles (though I later restored it), I decided to create a new website for my second life as a photographer. I toyed with many new ideas, but the answer came unexpectedly from my father. My father is a photographer as well; he has taken nature photographs his whole life and has never deviated from that course. When he visited me in New Zealand, he would always point out rubbish on the side of the road or flotsam on the beach. In his view, all I photographed was ‘junk’.

At the time, I was thinking of the Austrian word for rubbish (in dialect, ‘Klumpert’). But since I was in New Zealand, I tried to find a local word with the same meaning. I found the word ‘Parahanga’ in the Māori Dictionary online and thought it was a suitable reference for my dive into analogue photography:

parahanga 1. (noun) rubbish, litter, scraps, rubbish dump, pollution.

So in 2009, I launched Parahanga.com and started my commitment to analogue photography. This was a deep dive. During this time, the process of learning was very zen-like. Analogue photography forces you to slow down and be patient. You contemplate carefully. In the case of Polaroid film, there were only ten or fewer prints to be made per pack, it was expensive, and the fickle nature of expired film often meant that not every exposure would turn out. Sometimes whole packs could be ruined. But the results could turn out to be other-worldly. And unlike digital, completely tangible: you could hold them in your hand (not on a computer screen or iPhone).

My success with Online Republic allowed me to buy all sorts of crazy and wacky cameras. Although I didn’t have space in my home for a darkroom, I soon learned how to at least develop negatives, including colour C-41 negatives, in my kitchen. It was a labour of passion, and I loved that I was involved in nearly every aspect of photography. The process didn’t stop at the rendering of light by a CMOS chip, or abstract 0s and 1s to be manipulated in Photoshop. It was a physical object, with no layer of abstraction between me and the final image.

In addition, I found the community of analogue photographers very endearing. I was quite involved on Flickr and Polanoid; on the latter site I won Shot of the Day six times. I relished the days I could go out and take photos again.

But all things change. In 2013, I made only three posts on Parahanga.com. In early 2014, my mum had a very serious car accident, which shook my world. Very fortunately for us, my mum turned out to be okay, but we decided that we should spend at least part of the year overseas. Photography took a back seat. I made my last post on Parahanga in June of that year. Over the next couple of years, I sold the majority of my cameras and packs of film.

Previously I had stopped photography for lack of passion; now I didn’t have the time. Life happens; however, life does come around in waves.

It’s August 2020, and I am currently overseas waiting to come back to New Zealand. My wife (my girlfriend when I bought my first Polaroid camera) and I have decided, after trying hard to balance two countries for the last five years, to return home to New Zealand for good. I had a look at parahanga.com the other day and decided to migrate it away from Google’s corporate blogging platform to a WordPress site. I am glad I did not let parahanga.com expire; a common practice among search engine spammers is to re-acquire old domains and dress them up with spam until the domain is not worth anything anymore. I am not sure if I will continue my photography on parahanga.com, but at least I can keep it from deteriorating on the web.

Although I am still months away from reaching my camera and film, there is still fuel in my tank for photography. Let’s see what happens.


Quick guide to Nginx and a Swift Perfect server on your local Linux machine.

Posted on May 02 2020

The goal is to use the Swift Perfect server’s default “hello world” template, but let Nginx serve the static files.

I am currently testing out the Swift Perfect server (from [Perfect.org][1]) together with Nginx. The idea here is to let Swift Perfect serve the dynamic content, and Nginx serve the static content (images, CSS, etc.).

This quick & dirty guide assumes the following:

  • You’ve got [Nginx][2] installed and running.
  • You’ve got [Swift][3] installed and working.
  • You’ve got [Perfect Server][4] installed and running.

This is for a local configuration: since I don’t have a domain yet, I want to run it locally. So open the hosts file

sudo nano /etc/hosts

and add this:

127.0.0.1       swift.local

This will allow you to access swift.local on your laptop or desktop computer.
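To double-check that the entry took, a quick ping (assuming you have ping available) should resolve swift.local to the loopback address:

ping -c 1 swift.local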

Let’s create a directory for the static files (e.g. images, PDFs, CSS) that your Nginx server will serve.

mkdir -p /var/www/swift.local/html/images

The -p flag will make sure that all directories along the way are created. Let’s download an image. I [used this image][5], renamed it swift.jpeg, and placed it in the images folder from above. I set the permissions to 0755:

sudo chmod 0755 /var/www/swift.local/html/images/swift.jpeg

Now that we have the directory structure, let’s create a virtual host file. In my Nginx setup, I go to the conf.d directory and create a file called swift.local.conf:

sudo nano /etc/nginx/conf.d/swift.local.conf

and add the following:

server {
        listen 80;

        root /var/www/swift.local/html;
        server_name swift.local www.swift.local;

        access_log /var/www/swift.local/access.log;
        error_log /var/www/swift.local/error.log;

        location / {
                #swift perfect server access
                proxy_pass        http://localhost:8181;
                proxy_set_header  X-Real-IP  $remote_addr;
        }


        # serve static files

        location ~ ^/(images|javascript|js|css|media|static)/  {
          root /var/www/swift.local/html;
          expires 30d;
        }


}

What’s going on here:

  • Our Nginx server is listening on port 80. If we were on a remote live server, I would change this to 443 for HTTPS.
  • I’ve stuck my access/error logs just above the root directory.
  • location / : this is where Swift does its magic; the proxy pass goes to port 8181, which is the default Perfect server port. All the dynamic goodies will be served from here.
  • Finally, the static files are all served by Nginx and given a 30-day caching expiry.

Once you’ve saved this, try the following commands. The first will test your Nginx configuration; if anything is wrong with the virtual host you just set up, it should show here. The second will reload Nginx.

sudo nginx -t 
sudo systemctl reload nginx

Finally, create a new PerfectHTTP project anywhere on your local machine. I have built mine in /home//swift/, but you can build it anywhere that makes sense to you. Run the following commands to create a starter project:

git clone https://github.com/PerfectlySoft/PerfectTemplate.git
cd PerfectTemplate
swift build

Now go and edit the main.swift file. There are two changes to make:

  1. Add the image tag to the response body.
  2. Remove or comment out the second route handler, because serving static files is the work we want Nginx to do, not PerfectHTTP:

      //routes.add(method: .get, uri: "/**", handler:
      //    StaticFileHandler(documentRoot: "./webroot", allowResponseFilters:
      //    true).handleRequest)

Here’s the main.swift boilerplate file that has been edited:

import PerfectHTTP
import PerfectHTTPServer

func handler(request: HTTPRequest, response: HTTPResponse) {
    response.setHeader(.contentType, value: "text/html")
    response.appendBody(string: "<html><title>Hello, world!</title><body>Hello, world! |<img src='/images/swift.jpeg'></body></html>")
    response.completed()
}

var routes = Routes()
routes.add(method: .get, uri: "/", handler: handler)

try HTTPServer.launch(name: "localhost",
                      port: 8181,
                      routes: routes,
                      responseFilters: [
                        (PerfectHTTPServer.HTTPFilter.contentCompression(data: [:]), HTTPFilterPriority.high)])

Almost there! The final step is to rebuild the project, then launch PerfectHTTP:

    swift build
    .build/debug/PerfectTemplate

Now, if all went well, you can point your browser at http://swift.local and see the “Hello World” page served by PerfectHTTPServer and the Swift image served by Nginx!
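You can verify both halves from the command line too (assuming curl is installed): the first request should return the Hello World HTML from Perfect, and the second should return a 200 for the image with an Expires header roughly 30 days out, courtesy of the Nginx rule above.

curl -s http://swift.local/
curl -sI http://swift.local/images/swift.jpeg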

Quick Note:

Another way to test how Nginx is handling images is to simply stop the Perfect server. On the homepage you should get a 502 error from Nginx (because it can’t find the Swift server at port 8181, which we stopped). However, you should still be able to see the image at http://swift.local/images/swift.jpeg

I had an issue where I couldn’t get any routes other than the root route working:

     routes.add(method: .get, uri: "/", handler: handler)

For instance, this arbitrary route was giving me an Nginx 404:

    routes.add(method: .get, uri: "/scooby/dooby/doo", handler: handler)

The PerfectHTTPServer code looked good; it turned out I had this try_files directive in my Nginx server block:

    location / {
            #swift perfect server access
            proxy_pass        http://localhost:8181;
            proxy_set_header  X-Real-IP  $remote_addr;
            try_files $uri $uri/ =404;
    }

The “try_files $uri $uri/ =404;” directive made Nginx supersede Perfect’s routing. I deleted the offending line and my Perfect routing worked again. In the future, error handling like 404s should be done by Perfect, not Nginx.
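As a sketch of that idea (my own addition, not part of the Perfect template): a catch-all route modelled on the commented-out wildcard route from earlier could let Perfect answer unmatched paths itself:

routes.add(method: .get, uri: "/**") { request, response in
    // hypothetical catch-all: any path that no other route matches
    // gets its 404 from Perfect rather than from Nginx
    response.status = .notFound
    response.appendBody(string: "404 Not Found")
    response.completed()
}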

Exporting from CSV format into Markdown – the long way.

Posted on Apr 03 2020

{… or how to export from the Textpattern CMS the hard way}

Kraxn.io recently migrated from Textpattern to a static site generator. *

This was a journey that didn’t happen overnight. In fact, as I couldn’t find any out-of-the-box solutions, it took a couple of weeks before I took a stab at it again. I had over 350 posts, so I could either do it manually or figure out a more automated way. Here are the steps I took:

  • Export a CSV from MySQL (you can use the MySQL command line, but I used phpMyAdmin’s export function since I had it installed).
  • Convert each row of the CSV into a Markdown (.md) file. Each row represents a blog entry.

Before settling on this path, I checked whether I could just do a quick & dirty download with wget, e.g.:

wget -mkEpnp http://example.org

Unfortunately, there was too much “dirty html” to clean up when converting to Markdown. There are probably ways of doing this with regex, and perhaps even PHP’s strip_tags function (in hindsight, this could have been a possibility), but I would have had to put in some effort to know all the HTML tags I used on the site. So I opted for the CSV export method.
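For the record, had I gone down the strip_tags route, a minimal sketch (untested, with $html standing in for one post’s raw body) might have looked like this:

// untested sketch: drop all markup except a few basic tags,
// leaving far less "dirty html" for the Markdown conversion
$clean = strip_tags($html, '<p><a><img><h1><h2><blockquote>');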

Exporting blog data as CSV.

From phpMyAdmin:

  • log in to phpMyAdmin
  • open the database you use and click on the table you wish to export
  • click export: method = quick; format = CSV **, then download

If you want to export from the MySQL command line, I suggest you try this link.

Generate md files from CSV rows using PHP

Now that we’ve got a CSV file with our content, let’s output each row as a Markdown file.

Below is the script I used, with some notes on data hygiene:

  • Download Markdownify and reference its scripts.
  • Copy the CSV into a PHP array for this script.
  • Posted – the date the entry was posted. This is essential for retaining the post date; I use PHP’s touch() to revert each file to the original post date.
  • Body_html – this is the main content.
  • Title – this is the title, also used for the URL. It had some funky characters, hence the long list of PHP string functions. Ideally a regex would be faster!
  • Image – the image filename, which is numeric.

An aside: I had a mix of PNG and JPG images, and I actually wanted all of them to be JPGs. Luckily I had just installed ImageMagick, which can bulk-convert images. Converting them all to JPGs also meant my script would work unchanged. Here is the command I used inside my images folder:

sudo mogrify -format jpg *.png

So here is the PHP script:

<?php
declare(strict_types = 1);

# https://github.com/Elephant418/Markdownify
require_once ("/Markdownify-master/src/Converter.php");
require_once ("/Markdownify-master/src/Parser.php");

/**
 * Convert a comma separated file into an associative array.
 * The first row should contain the array keys.
 *
 * @param string $filename Path to the CSV file
 * @param string $delimiter The separator used in the file
 * @return array
 * @link http://gist.github.com/385876
 * @author Jay Williams <http://myd3.com/>
 * @copyright Copyright (c) 2010, Jay Williams
 * @license http://www.opensource.org/licenses/mit-license.php MIT License
 */
function csv_to_array(string $filename = '', string $delimiter = ',')
{
    if (!file_exists($filename) || !is_readable($filename))
        return FALSE;

    $header = NULL;
    $data = array();
    if (($handle = fopen($filename, 'r')) !== FALSE)
    {
        while (($row = fgetcsv($handle, 1000, $delimiter)) !== FALSE)
        {
            if (!$header)
                $header = $row;
            else
                $data[] = array_combine($header, $row);
        }
        fclose($handle);
    }
    return $data;
}

$csv = csv_to_array('textpattern.csv', ",");

foreach ($csv as $item) {

    // collect all the data for the html body: heading, body html, and image
    $file_data =
        "<h1>" . $item['Title'] . "</h1>" .
        $item['Body_html'] .
        "<img src='/inc/images/" . $item['Image'] . "t.jpg'>";

    // convert the assembled html to markdown
    $converter = new Markdownify\Converter;
    $file_data = $converter->parseString($file_data);

    // the title handling is a bit janky, but working: build a slug for the file name
    $file_name = false;
    if ($item['Title']) {
        $file_name =
            strtolower(str_replace(" ", "-", strip_tags(str_replace("/", "-", trim($item['Title']))))) . ".md";
    }

    if ($file_name != false) {

        $file_handler = fopen($file_name, 'w');
        fwrite($file_handler, $file_data);
        fclose($file_handler);

        // important: reset the file's modification time to the original post date
        $get_timestamp = strtotime($item['Posted']);
        touch($file_name, $get_timestamp);
    }
}
?>

Using declare(strict_types = 1); may trigger some warnings on $data[] = array_combine($header, $row);. These can be ignored, or they can be fixed by deleting any columns you don’t need from the CSV file.
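Another option (a sketch of my own, not the fix I actually used) is to guard the array_combine() call inside csv_to_array() so malformed rows are skipped instead of combined:

// skip rows whose column count doesn't match the header
if (is_array($row) && count($header) === count($row)) {
    $data[] = array_combine($header, $row);
}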


* Of all the database-driven CMSs, I prefer Textpattern over WordPress. In fact, I had accumulated over 300 posts on Textpattern and had virtually no issues. I decided to move because I used Luapress on a demo site and fell in love with it. My workflow is much better with Typora and Luapress; it just feels natural. I would still recommend Textpattern over WordPress.

** Alternatively, I noticed there was a PHP array export option there too. I haven’t tested it, but it theoretically might have saved some steps.

Update 2020-09-30: I’ve recently learned of a much easier way to export data from MySQL directly into a file, just by issuing a few simple SQL commands. See how to save MySQL query output to a file.
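As a rough illustration of that approach (assuming a table named textpattern and an output path MySQL is allowed to write to), the query looks something like this:

SELECT Title, Posted, Body_html, Image
INTO OUTFILE '/tmp/textpattern.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM textpattern;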

Of browsers and sausages

Posted on Mar 18 2020

“Je weniger die Leute wissen, wie Würste und Gesetze gemacht werden, desto besser schlafen sie!” (the less people know how sausages and laws are made, the better they sleep at night!) – Otto von Bismarck

Your web browser may be the most complex piece of software on your computer. The original web browsers simply rendered HTML and text from remote servers: pretty simple. That was 20 years ago. Think about what browsers can do today:

  • Render 3D graphics – with WebGL you can have a 3D engine in your browser. You can play all sorts of games and view 3D content online today.
  • Access your computer – you can drag and drop files directly into the browser to upload them, and your webcam can be used in the browser for web conferencing.
  • Stream multimedia – this was always pretty clunky in the old days and usually involved downloading files. Today streaming is standard fare on the internet.
  • Geolocation – browsers can detect your location; think of all the online map tools you use.

Browsers can manage memory, hardware, and almost all of the computing processes that an operating system (Windows, iOS, Android) can.

To show how complex browsers have become, developer Drew DeVault downloaded all the specifications for web standards, of which browsers bear the brunt of the implementation. In his article The reckless, infinite scope of web browsers, he counted over 100 million words across those specifications. He stated that:

I conclude that it is impossible to build a new web browser. The complexity of the web is obscene. The creation of a new web browser would be comparable in effort to the Apollo program or the Manhattan project.

Another developer, Casey Muratori, discusses how the duopolies in operating systems (Windows/Mac for desktops, Android/iOS for mobile) have led to less-than-optimal results for software:

In all cases, they are maintained by companies whose revenue does not primarily come from the sale of operating systems, and who have many incentives to pursue other goals than delivering the most stable, reliable, trustworthy experience. The operating system is never a product anymore — it is merely something users are forced to use based on the hardware they have chosen, and it is increasingly treated solely as a vehicle for pursuing the platform holders’ other business goals.

Our browser has morphed from a simple HTML renderer into a multi-tool we run our lives through. What are the consequences of using such a complex piece of software?

Less competition – independent browser makers cannot keep up with the complexity, and Apple’s and Google’s ecosystems make it very difficult to compete.

More software vulnerabilities – DeVault cites over 8000 common vulnerabilities and exposures (CVEs) for browsers such as Firefox, Chrome, Safari and IE. The more complicated the code, the more likely it is to be exploited. This is a step away from the Unix philosophy axiom: do one thing, and do it well.

Privacy is a second-class citizen – many of Firefox’s default features send ‘digital breadcrumbs’ back to servers, while other features let websites disable certain functions (e.g. allowing sites to disable the right mouse click).

A user on GitHub named 0XDE57 has compiled a large list of under-the-hood settings in Firefox’s about:config, some of which may break certain websites. However, many of them can give you back some degree of control over your browser.
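For illustration, here are a few well-known about:config preferences in user.js form (standard Firefox settings I know of, not necessarily verbatim entries from that list):

user_pref("dom.event.contextmenu.enabled", false); // stop sites from hijacking the right-click menu
user_pref("geo.enabled", false);                   // disable geolocation lookups
user_pref("beacon.enabled", false);                // disable sendBeacon background pings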

Another software programmer, Andrew Chase, has compiled a similar list of modifications for Chrome and Firefox, also hosted on GitHub.

Closing thoughts.

Complexity isn’t necessarily a bad thing; however, it does make obfuscation easier. The average user will not care how their browser works, so long as it works.

Going back to Bismarck’s comment – the less you know, the better you sleep. While some sleep may be lost knowing what your browser is doing under the hood, perhaps we should put that insomnia toward making better software.