<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
<channel>
<title><![CDATA[ Adam Malone ]]></title>
<description><![CDATA[ Mostly technology, sometimes thoughts, stories and ideas. ]]></description>
<link>https://www.adammalone.net</link>
<image>
    <url>https://www.adammalone.net/favicon.png</url>
    <title>Adam Malone</title>
    <link>https://www.adammalone.net</link>
</image>
<lastBuildDate>Fri, 13 Mar 2026 08:57:02 +0000</lastBuildDate>
<atom:link href="https://www.adammalone.net" rel="self" type="application/rss+xml"/>
<ttl>60</ttl>

    <item>
        <title><![CDATA[ Pretty damn secure self hosted Bitwarden ]]></title>
        <description><![CDATA[ Every year I spend an afternoon reading through my credit card statement to see whether I&#39;ve accidentally forgotten to unsubscribe from something.

This year was no different, and on my travels through the statement, I stumbled upon my LastPass subscription.

While there are two certainties for everyone in ]]></description>
        <link>https://www.adammalone.net/pretty-damn-secure-self-hosted-bitwarden/</link>
        <guid isPermaLink="false">61e884bdb2367777336b5486</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[  ]]></dc:creator>
        <pubDate>Mon, 03 Oct 2022 10:30:00 +0000</pubDate>
        <media:content url="https://www.adammalone.net/content/images/2022/10/khara-woods-G4ehn6dE7hQ-unsplash.jpeg" medium="image"/>
<content:encoded><![CDATA[ <p>Every year I spend an afternoon reading through my credit card statement to see whether I've accidentally forgotten to unsubscribe from something.</p><p>This year was no different, and on my travels through the statement, I stumbled upon my LastPass subscription. </p><figure class="kg-card kg-image-card"><img src="https://www.adammalone.net/content/images/2022/02/Qantas_Money.png" class="kg-image" alt loading="lazy" width="751" height="48" srcset="https://www.adammalone.net/content/images/size/w600/2022/02/Qantas_Money.png 600w, https://www.adammalone.net/content/images/2022/02/Qantas_Money.png 751w" sizes="(min-width: 720px) 720px"></figure><p>While there are two certainties for everyone in life (death and taxes), for me there are also two Sisyphean tasks that I continue to work on:</p><ul><li>Unsubscribing from emails</li><li>Ceasing subscriptions</li></ul><p>Figuring I could save some money here, and that the inertia to leave a password manager is possibly even higher than that of leaving your bank, I felt up to the challenge.</p><h3 id="why-not-lastpass">Why not LastPass?</h3><p>This would have been a good question 5-10 years ago, but honestly I feel like the more relevant question now is <em>Why LastPass?</em></p><p>Since their acquisition by LogMeIn in 2015, the only new feature I've seen is an increase in cost. I've been a premium subscriber for a few years in order to share password folders, but it still irks me that they use predatory practices to stymie usage of their free account.</p><p>The most egregious of these is the recent limitation of one device per free user. Honestly, which person nowadays only has one device? I find it to be an extremely security-hindering limitation for a <a href="https://www.lastpass.com/security?ref=adammalone.net">company that states</a>:</p><blockquote>Security is our highest priority at LastPass</blockquote><p>But it's not just price gouging that's given me the ick. 
It's also the aggregation of security incidents in <a href="https://blog.lastpass.com/2015/06/lastpass-security-notice.html/?ref=adammalone.net">2015</a>, <a href="https://labs.detectify.com/2016/07/27/how-i-made-lastpass-give-me-all-your-passwords/?ref=adammalone.net">2016</a>, <a href="https://blog.lastpass.com/2017/03/important-security-updates-for-our-users.html/?ref=adammalone.net">2017</a>, <a href="https://www.theregister.co.uk/2019/09/16/lastpass_vulnerability/?ref=adammalone.net">2019</a>, and <a href="https://www.reviewgeek.com/72272/the-lastpass-android-app-contains-7-trackers-from-third-party-companies-%F0%9F%98%AC/?ref=adammalone.net">2021</a>. I want my most secure data to remain secure, and my trust, the most important thing (currently) in an online world, has been shaken. <em>I say currently because zero-trust is neat.</em></p><h3 id="what-are-the-alternatives">What are the alternatives?</h3><p>LastPass isn't the boy without a date at prom. There's a handful of password managers to choose from which have gained prevalence over the last few years, from 1Password to KeePassX to Dashlane. Each has its own benefits, drawbacks, and pricing.</p><p>Ultimately though, I chose to go with Bitwarden, and it came down to three reasons:</p><ul><li>Open Source (Despite <a href="https://www.adammalone.net/post/a-year-of-drift/">leaving the FOSS world</a> I'm still a FOSS boy at heart)</li><li>A free account that hasn't been hamstrung into impracticality</li><li>Strong emphasis on security both in <a href="https://bitwarden.com/help/security-faqs/?ref=adammalone.net">system architecture</a> and level of <a href="https://blog.bitwarden.com/bitwarden-completes-third-party-security-audit-c1cc81b6d33?ref=adammalone.net">audit</a> </li></ul><p>While I could have signed up for a free account on the <a href="https://bitwarden.com/?ref=adammalone.net">Bitwarden website</a>, I decided to go full neckbeard and host my own password manager. 
This was 50:50 living my open source tenets as well as just seeing whether I could.</p><h3 id="how-did-i-install-bitwarden">How did I install Bitwarden?</h3><p>In a word: Ansible.</p><p>In a few more words, I created a new 1GB/1CPU Digital Ocean droplet (<a href="https://m.do.co/c/fee670a0b1ab?ref=adammalone.net">referral link</a>) using Ubuntu 18.04 LTS because I wanted to ensure complete separation between where my passwords are stored and other servers. I also enabled automatic backups because why not right?</p><p>Once the server was provisioned, I SSH'd in and ran the following commands to install Ansible and the packages I'd require.</p><pre><code class="language-bash">apt-get update
apt-get upgrade
add-apt-repository --yes --update ppa:ansible/ansible
apt install ansible
ansible-galaxy install geerlingguy.swap
ansible-galaxy install ahuffman.resolv
ansible-galaxy install geerlingguy.security
ansible-galaxy install geerlingguy.firewall
ansible-galaxy install geerlingguy.ntp
ansible-galaxy install geerlingguy.certbot
ansible-galaxy install geerlingguy.nginx
ansible-galaxy install geerlingguy.postgresql
ansible-galaxy install geerlingguy.postfix
ansible-galaxy install jenstimmerman.vaultwarden
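# n.b. at the time, 0.5 of jenstimmerman.vaultwarden hadn't been pushed to
# Galaxy; a role can also be installed straight from its GitHub repo, e.g.:
#   ansible-galaxy install git+https://github.com/JensTimmerman/ansible-role-vaultwarden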
ansible-galaxy install adamruzicka.wireguard</code></pre><p>I did end up having to use the version of <code>jenstimmerman.vaultwarden</code> from GitHub rather than Ansible Galaxy because 0.5 hadn't been pushed. <a href="https://github.com/JensTimmerman/ansible-role-vaultwarden/issues/3?ref=adammalone.net">The author has since fixed that though</a>!</p><p>After this, I created a custom role and placed the following configuration in <code>/etc/ansible/roles/typhonius.servername/tasks/main.yml</code> for some of the tweaks I'd need on the server:</p><pre><code class="language-yml"># Create required users and ensure periodic running of Ansible
- name: Ensure typhonius group exists
  group:
    name: typhonius
    state: present

- name: Add the user typhonius
  user:
    name: typhonius
    groups: typhonius,sudo
    create_home: true
    shell: '/bin/bash'

- name: Installing ssh key for typhonius
  authorized_key:
    user: typhonius
    key: "{{ lookup('file', './files/authorized_keys.typhonius.pub') }}"

- name: Add the user bitwarden
  user:
    name: bitwarden
    create_home: false
    shell: '/bin/nologin'

- name: Runs Ansible on cron
  cron:
    name: "Ansible cron"
    state: "present"
    user: "root"
    hour: "15"
    minute: "0"
    job: '/usr/bin/ansible-playbook /etc/ansible/servername.yml'

# Required packages for Certbot
- name: install unzip
  package:
    name: unzip
    state: present

- name: install openresolv
  package:
    name: openresolv
    state: present

# For Wireguard
- name: Enable IPv4 forwarding
  sysctl:
    name: net.ipv4.ip_forward
    value: '1'
    state: present
    reload: yes

- name: Enable IPv6 forwarding
  sysctl:
    name: net.ipv6.conf.all.forwarding
    value: '1'
    state: present
    reload: yes

- name: Allow wg0 to route to localhost
  sysctl:
    name: net.ipv4.conf.wg0.route_localnet
    value: '1'
    state: present
    reload: yes</code></pre><p>I then created a <code>servername.yml</code> in <code>/etc/ansible</code> and filled it with the following</p><pre><code class="language-yml">- hosts: localhost
  vars_files:
    - vars/main.yml
  roles:
    - { role: typhonius.servername }
    - { role: geerlingguy.swap }
    - { role: ahuffman.resolv }
    - { role: geerlingguy.security }
    - { role: geerlingguy.firewall }
    - { role: geerlingguy.ntp }
    - { role: geerlingguy.certbot }
    - { role: geerlingguy.nginx }
    - { role: geerlingguy.postgresql }
    - { role: geerlingguy.postfix }
    - { role: jenstimmerman.vaultwarden }
    - { role: adamruzicka.wireguard }</code></pre><p>Lastly, I filled the <code>vars/main.yml</code> referenced by the playbook with configuration for each role:</p><pre><code class="language-yml">ansible_python_interpreter: /usr/bin/python3

# geerlingguy.nginx
nginx_extra_http_options: |
        resolver 1.1.1.1;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;

nginx_remove_default_vhost: true
nginx_server_tokens: "off"
nginx_multi_accept: "on"
nginx_listen_ipv6: false
nginx_vhosts:
  - server_name: "servername.adammalone.net"
    listen: "127.0.0.1:443 ssl http2"
    state: "present"
    template: "{{ nginx_vhost_template }}"
    filename: "servername.adammalone.net-https.conf"
    extra_parameters: |
        location / { deny all; }
        ssl_certificate     /etc/letsencrypt/live/servername.adammalone.net/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/servername.adammalone.net/privkey.pem;
  - server_name: "bitwarden.adammalone.net"
    listen: "127.0.0.1:443 ssl http2"
    filename: "bitwarden.adammalone.net-https.conf"
    extra_parameters: |
        ssl_certificate     /etc/letsencrypt/live/bitwarden.adammalone.net/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/bitwarden.adammalone.net/privkey.pem;
        location / {
                proxy_pass http://localhost:8008/;
                proxy_set_header Host $server_name;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location /notifications/hub {
                proxy_pass http://localhost:3003;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
        }
        location /notifications/hub/negotiate {
                proxy_pass http://localhost:8008;
        }
        add_header X-Frame-Options SAMEORIGIN;
        add_header Referrer-Policy "strict-origin-when-cross-origin";
        add_header Content-Security-Policy "default-src 'self'; prefetch-src 'self'; connect-src 'self' adammalone.report-uri.com; font-src 'self' data:; frame-src 'self'; img-src 'self' data:; script-src 'self' 'unsafe-inline' ; style-src 'self' 'unsafe-inline'; media-src 'self'; base-uri 'self'; report-to csp-endpoint";
        add_header Report-To '{"group":"csp-endpoint","max_age":31536000,"endpoints":[{"url":"https://adammalone.report-uri.com/r/d/csp/enforce"}]},{"group":"default","max_age":31536000,"endpoints":[{"url":"https://adammalone.report-uri.com/a/d/g"}],"include_subdomains":true}';
        add_header X-Content-Type-Options "nosniff";
        add_header Feature-Policy "accelerometer 'none'; camera 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; payment 'none'; usb 'none'";

# geerlingguy.ntp
ntp_manage_config: true
ntp_restrict:
  - "127.0.0.1"
  - "::1"

# geerlingguy.security
security_ssh_port: 38387
security_autoupdate_mail_to: "servername@adammalone.net"
security_sudoers_passwordless:
  - typhonius

# geerlingguy.firewall
firewall_allowed_tcp_ports:
  - "38387" # SSH
  - "80" # Certbot
firewall_allowed_udp_ports:
  - "53" # Wireguard
  - "55290" # Wireguard

firewall_additional_rules:
  - "iptables -t nat -A PREROUTING -p tcp -i wg0 --dport 443  -d REDACTED -j DNAT --to-destination 127.0.0.1"
  - "iptables -t nat -A PREROUTING -p udp --dport 53 -j REDIRECT --to-ports 51820 -i eth0"
  - "iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE"
  - "iptables -A INPUT -p tcp -i wg0 --dport 443 -j ACCEPT"

# geerlingguy.postgres
postgresql_users:
  - name: bitwarden
    password: REDACTED

postgresql_databases:
  - name: bitwarden
    owner: bitwarden

# jenstimmerman.vaultwarden
vaultwarden_version: 1.23.1
vaultwarden_webvault_version: 2.25.0
vaultwarden_config:
  DOMAIN: "https://bitwarden.adammalone.net"
  DOMAIN_PATH: ""  # results in a domain of https://example.com/vaultwarden/, needs to start with a '/'
  DATABASE_URL: "postgresql://bitwarden:REDACTED@/bitwarden?host=/run/postgresql/"
  ROCKET_ADDRESS: 127.0.0.1
  ROCKET_PORT: 8008
  SIGNUPS_ALLOWED: false
  SIGNUPS_VERIFY: true
  SIGNUPS_DOMAINS_WHITELIST: 'adammalone.net'
  INVITATIONS_ALLOWED: 'false'
  SMTP_FROM: 'bitwarden@adammalone.net'
  SMTP_FROM_NAME: 'bitwarden'
  SMTP_HOST: smtp.sendgrid.net
  SMTP_PORT: 587
  SMTP_SSL: true
  SMTP_EXPLICIT_TLS: false
  SMTP_USERNAME: apikey
  SMTP_PASSWORD: REDACTED
  SMTP_AUTH_MECHANISM: "Login"
  WEBSOCKET_ENABLED: true
  WEBSOCKET_ADDRESS: 127.0.0.1
  WEBSOCKET_PORT: 3003
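  # With ADMIN_TOKEN left commented out, the /admin panel stays disabled
  # entirely; if enabling it, generate a long random token, e.g. with
  # `openssl rand -base64 48`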
  #ADMIN_TOKEN: "REDACTED"

# adamruzicka.wireguard
wireguard_networks:
  - wg0

wireguard_wg0_interface:
  address: 10.10.0.0/16
  private_key: REDACTED
  listen_port: 51820
  post_up: 'iptables -A FORWARD -i %i -j wireguard; iptables -A FORWARD -o %i -j wireguard; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE'
  post_down: 'iptables -D FORWARD -i %i -j wireguard; iptables -D FORWARD -o %i -j wireguard; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE'
  dns: 1.1.1.1

wireguard_wg0_peers:
  laptop:
    public_key: REDACTED
    allowed_ips: 10.10.10.96/32
  mobile:
    public_key: REDACTED
    allowed_ips: 10.10.10.97/32

# ahuffman.ansible-resolv
resolv_nameservers:
  - "1.1.1.1"
  - "1.0.0.1"
resolv_options:
  - "timeout:2"
  - "rotate"

# geerlingguy.certbot
certbot_create_if_missing: true
certbot_admin_email: certbot@adammalone.net
certbot_certs:
 - domains:
     - servername.adammalone.net
 - domains:
     - bitwarden.adammalone.net</code></pre><h3 id="is-it-more-secure">Is it more secure?</h3><p>As a result of the firewalling, the server is for all intents and purposes as locked down as is possible with a single box. Yes, I could have added in jump servers/bastion hosts and further increased complexity, but the following was Good Enough™ for my needs.</p><figure class="kg-card kg-image-card"><img src="https://www.adammalone.net/content/images/2022/03/Untitled-2022-01-05-2343--7-.png" class="kg-image" alt loading="lazy" width="948" height="501" srcset="https://www.adammalone.net/content/images/size/w600/2022/03/Untitled-2022-01-05-2343--7-.png 600w, https://www.adammalone.net/content/images/2022/03/Untitled-2022-01-05-2343--7-.png 948w" sizes="(min-width: 720px) 720px"></figure><p>The only <em>publicly </em>available TCP port is my SSH port, which greatly limits the attack surface. In order for me to get access to any of the passwords within Bitwarden, I need to authenticate via WireGuard, which then gives me access to NGINX.</p><p>Without authenticating, anyone trying to access the server won't be able to access anything, and if they navigate to the Bitwarden URL the page simply won't load; it will error out instead.</p><p>I decided to architect the configuration in this way to provide me with a little extra protection in the event of a Bitwarden vulnerability. If a bad actor isn't able to access the Bitwarden instance, then they won't be able to attack it. This is my way of attempting to use defence in depth.</p><h3 id="am-i-going-to-keep-it">Am I going to keep it?</h3><p>Honestly, probably not. But maybe.</p><p>As a proof of concept it was super fun, and I've been using it successfully on all my devices for over six months. 
That being said, I'm probably going to switch over to one of the <a href="https://bitwarden.com/?ref=adammalone.net">Bitwarden hosted plans</a> reasonably soon for two reasons:</p><ul><li>I don't really have a backup strategy for Bitwarden aside from block-level server backups, and that's scary. <a href="https://github.com/dani-garcia/vaultwarden/wiki/Backing-up-your-vault?ref=adammalone.net">Some exist</a>, but I would want to write my own as another fun project (which I can then of course open source). The main issue here is the lack of pgSQL support in existing repos</li><li>Having to activate WireGuard every time I want to save a new password is actually a massive pain – even if it's only one click</li></ul> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Blackholing Domains with WireGuard ]]></title>
        <description><![CDATA[ Short post incoming because it&#39;s not worthy of a longer one, but more interesting than dropping a tweet.

I noticed that my laptop was still connecting to ad serving domains I&#39;d blackholed in /etc/hosts when I was connected to my WireGuard VPN. Obviously this wasn& ]]></description>
        <link>https://www.adammalone.net/blackholing-domains-with-wireguard/</link>
        <guid isPermaLink="false">632505cd2b33fc1b6b6f7b6f</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[  ]]></dc:creator>
        <pubDate>Fri, 16 Sep 2022 23:46:55 +0000</pubDate>
        <media:content url="https://www.adammalone.net/content/images/2022/09/aman-pal-1c2iHG5_MgE-unsplash-1.jpeg" medium="image"/>
<content:encoded><![CDATA[ <p>Short post incoming because it's not worthy of a longer one, but more interesting than dropping a tweet.</p><p>I noticed that my laptop was still connecting to ad-serving domains I'd blackholed in <code>/etc/hosts</code> when I was connected to my WireGuard VPN. Obviously this wasn't great, as the point of blackholing them was to ensure my laptop couldn't connect.</p><p>Looking at the <a href="https://manpages.debian.org/unstable/wireguard-tools/wg.8.en.html?ref=adammalone.net">official WireGuard docs</a>, I couldn't see anything that pointed me in the right direction. The <a href="https://github.com/pirate/wireguard-docs?ref=adammalone.net">unofficial docs</a> were better, but didn't have much about the <code>DNS</code> line in <code>wg0.conf</code>. </p><p>Before I began, my <code>wg0.conf</code> looked like this, with DNS provided by Cloudflare.</p><figure class="kg-card kg-code-card"><pre><code>[Interface]
PrivateKey = &lt;SNIP&gt;
PostDown = iptables -D FORWARD -i %i -j wireguard; iptables -D FORWARD -o %i -j wireguard; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
Address = 10.10.0.0/16
PostUp = iptables -A FORWARD -i %i -j wireguard; iptables -A FORWARD -o %i -j wireguard; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 52880
DNS = 1.1.1.1
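# resolver handed to connecting peers; this is the line extended below to blackhole domains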

[Peer] # laptop
PublicKey = &lt;SNIP&gt;
AllowedIPs = 10.10.10.10/32

[Peer] # phone
PublicKey = &lt;SNIP&gt;
AllowedIPs = 10.10.10.11/32</code></pre><figcaption>wg0.conf</figcaption></figure><p>After a few tries with multiple <code>DNS</code> entries and separators, I found that to block domains effectively, I simply needed to add them to the <code>DNS</code> config line, separated by <code>;</code>. This means that my <code>DNS</code> entry became as follows and those domains were sequestered in the darkness.</p><figure class="kg-card kg-code-card"><pre><code>DNS = 1.1.1.1; 0.0.0.0, example1.com; 0.0.0.0, example2.com</code></pre><figcaption>Multiple DNS entries and lookup servers</figcaption></figure> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ My first NFT with ENS and IPFS ]]></title>
        <description><![CDATA[ Ok, so this isn&#39;t my first NFT
[https://en.wikipedia.org/wiki/Non-fungible_token], but the current title hits a
lot harder than:

&gt; My first NFT that isn&#39;t an overpriced JPEG or playing card from an online game
Remaining up-to-date with technology as it evolves, ]]></description>
        <link>https://www.adammalone.net/my-first-nft-with-ens-and-ipfs/</link>
        <guid isPermaLink="false">61da2fd7e82f277765e36a6a</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sat, 29 Jan 2022 07:30:00 +0000</pubDate>
        <media:content url="https://www.adammalone.net/content/images/2022/01/shubham-dhage-_rZnChsIFuQ-unsplash.jpeg" medium="image"/>
        <content:encoded><![CDATA[ <p>Ok, so this isn't my first <a href="https://en.wikipedia.org/wiki/Non-fungible_token?ref=adammalone.net">NFT</a>, but the current title hits a lot harder than:</p><blockquote>My first NFT that isn't an overpriced JPEG or playing card from an online game</blockquote><p>Remaining up-to-date with technology as it evolves, albeit to a reduced depth <a href="https://www.adammalone.net/post/a-year-of-drift/">concordant with my gradual tech withdrawal</a>, is something I've <a href="https://www.adammalone.net/post/running-ghost-on-tor/">blogged about previously</a>.</p><p>I've had my eye on the Ethereum Name Service (<a href="https://docs.ens.domains/?ref=adammalone.net">ENS</a>) for a while; especially since I'm an accidental domain collector for side projects that I complete 80% of.</p><p>My interest in the ENS rose even further over the last couple of months after seeing a lot more people from crypto Twitter change their name to &lt;username&gt;.eth.</p><p>This weekend, with New Year out of the way I decided to register my own domain, and stencil ownership of it immutably into the blockchain.</p><p>After doing some cursory research into the steps I'd need to take, I jotted down my next steps so I'd have a rough plan to work to:</p><ol><li>Set up an IPFS node to host my ETH website</li><li>Add the ETH website content to my IPFS node so it could be distributed</li><li>Register the domain</li><li>Associate the content hash of my website with the domain</li></ol><h3 id="why-a-domain">Why a domain?</h3><p>The choice to purchase a domain NFT rather than a JPEG like everyone else was also pretty clear to me. 
While there's a lot of hype (and a lot of money changing hands) in the profile picture space, I'm treating the majority of picture collections as proofs of concept.</p><p>The real benefit of an NFT comes from provable and irrevocable ownership of <em>something </em>and a domain is an example of something I would use rather than just hodl. Whilst I think we're still early in the crypto hype cycle, and NFTs are pumping right now, there's a lot both to come and to stabilise. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.adammalone.net/content/images/2022/01/image-3.png" class="kg-image" alt loading="lazy" width="1280" height="871" srcset="https://www.adammalone.net/content/images/size/w600/2022/01/image-3.png 600w, https://www.adammalone.net/content/images/size/w1000/2022/01/image-3.png 1000w, https://www.adammalone.net/content/images/2022/01/image-3.png 1280w" sizes="(min-width: 720px) 720px"><figcaption>Hype Cycle of Blockchain 2021 https://blogs.gartner.com/avivah-litan/2021/07/14/hype-cycle-for-blockchain-2021-more-action-than-hype/</figcaption></figure><p>Over the next few years, there'll likely be more concrete use cases for what today seems like a fad. 
I'm personally very interested in <a href="https://en.wikipedia.org/wiki/Zero-knowledge_proof?ref=adammalone.net">zero-knowledge proofs</a> as that will completely change the way authentication is completed – say goodbye passwords!</p><h2 id="ipfs">IPFS</h2><h3 id="ansible">Ansible</h3><p>The reason I decided to host my own IPFS node was twofold:</p><ol><li>I didn't want to pay a third party</li><li>I wanted to learn how to do it so I'd be able to talk about it</li></ol><p>It was startlingly easy to set up IPFS on one of my servers by including <a href="https://galaxy.ansible.com/andrewrothstein/ipfs?ref=adammalone.net">andrewrothstein's ipfs role</a> and then tweaking a custom role of my own to run the service in the background as a daemon.</p><figure class="kg-card kg-code-card"><pre><code class="language-ansible">- name: Add systemd service for ipfs daemon.
  copy:
    src: ipfs.service
    dest: /etc/systemd/system/ipfs.service
    owner: root
    group: root
    mode: '0644'

- name: Make sure ipfs service is running
  systemd:
    name: ipfs
    state: started
    enabled: yes</code></pre><figcaption>main.yml for my ipfs role</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-systemd">[Unit]
Description=IPFS Daemon
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=simple
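# --init-profile=server disables local host discovery and --routing=dhtclient
# runs the DHT in client-only mode; both tamed the daemon's resource usage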
ExecStart=/usr/local/bin/ipfs daemon --enable-namesys-pubsub --enable-gc --init --init-profile=server --routing=dhtclient
User=adam

[Install]
WantedBy=multi-user.target</code></pre><figcaption>ipfs.service definition file</figcaption></figure><p>As an optional addition, I decided to be a good netizen by opening up port 4001 so my node would play a more active role in the network <em>and</em> so in the event that no other nodes pin my content, it can always be found.</p><p>Whilst I didn't do this for speed, another good recommendation would be to create a <em>service user</em> called ipfs with a nologin shell.</p><h3 id="content">Content</h3><p>Creating and pinning the content was also super easy. While I could have created an entire website to be hosted in ipfs, I surmised that the most valuable thing to do would be to create a redirect to this site.</p><p>The HTML I wrote was extremely basic, and in hindsight should probably have used a meta refresh tag rather than JavaScript to be more inclusive to <a href="https://noscript.net/?ref=adammalone.net">noscript</a> users.</p><figure class="kg-card kg-code-card"><pre><code class="language-html">&lt;!DOCTYPE html&gt;
&lt;html lang="en" dir="ltr"&gt;
    &lt;head&gt;
        &lt;meta charset="utf-8"&gt;
        &lt;title&gt;adammalone.eth&lt;/title&gt;
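        &lt;!-- a noscript-friendly alternative to the script below would be: --&gt;
        &lt;!-- &lt;meta http-equiv="refresh" content="0; url=https://www.adammalone.net"&gt; --&gt;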

        &lt;script&gt;
            window.location.href = "https://www.adammalone.net"
        &lt;/script&gt;
    &lt;/head&gt;
&lt;/html&gt;</code></pre><figcaption>The index.html file which would become my ETH website.</figcaption></figure><p>Taking this file, I added it to ipfs and pinned so it would persist and prevent it from being garbage collected.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">adam@ipfs:~$ ipfs add ~/ipfs/index.html
 added QmRw4UV4UukUydbKXshFD9UwghpWZeFYd4Nsrp4UChSSEh index.html
 243 B / 243 B [===========================] 100.00%
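 # the returned string is the CID: a cryptographic hash of the content,
 # so identical content always produces an identical CID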
 
adam@ipfs:~$ ipfs pin add /ipfs/QmRw4UV4UukUydbKXshFD9UwghpWZeFYd4Nsrp4UChSSEh
pinned QmRw4UV4UukUydbKXshFD9UwghpWZeFYd4Nsrp4UChSSEh recursively</code></pre><figcaption>Adding and pinning my file.</figcaption></figure><h3 id="testing">Testing</h3><p>In order to test that everything worked as expected, I wanted to use the web UI to check my commands had the desired outcome.</p><p>To do this without opening UI ports to the internet, I created an SSH tunnel and forwarded my local ports to the remote ipfs node.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">ssh -L 5001:127.0.0.1:5001 ipfs.example.com</code></pre><figcaption>Command required to tunnel local port 5001 to remote port 5001.</figcaption></figure><p>The SSH tunnel allows me to connect to port 5001 on my local laptop and have that securely forwarded through the tunnel to port 5001 on my remote server. So when my browser navigates to 127.0.0.1:5001, it's actually from the frame of reference of the remote server.</p><p></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.adammalone.net/content/images/2022/01/Untitled-2022-01-05-2343--4-.png" class="kg-image" alt loading="lazy" width="966" height="453" srcset="https://www.adammalone.net/content/images/size/w600/2022/01/Untitled-2022-01-05-2343--4-.png 600w, https://www.adammalone.net/content/images/2022/01/Untitled-2022-01-05-2343--4-.png 966w" sizes="(min-width: 720px) 720px"><figcaption>How SSH tunneling works.</figcaption></figure><p>From there, I was able to open a browser on my local laptop and navigate to <a href="http://127.0.0.1:5001/webui?ref=adammalone.net">http://127.0.0.1:5001/webui</a> to confirm the daemon was working and confirm that my pinned files existed and the content matched.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://www.adammalone.net/content/images/2022/01/____Pins___IPFS-1.png" width="1701" height="571" loading="lazy" 
alt srcset="https://www.adammalone.net/content/images/size/w600/2022/01/____Pins___IPFS-1.png 600w, https://www.adammalone.net/content/images/size/w1000/2022/01/____Pins___IPFS-1.png 1000w, https://www.adammalone.net/content/images/size/w1600/2022/01/____Pins___IPFS-1.png 1600w, https://www.adammalone.net/content/images/2022/01/____Pins___IPFS-1.png 1701w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://www.adammalone.net/content/images/2022/01/_ipfs_QmRw4UV4UukUydbKXshFD9UwghpWZeFYd4Nsrp4UChSSEh___IPFS-1.png" width="1705" height="580" loading="lazy" alt srcset="https://www.adammalone.net/content/images/size/w600/2022/01/_ipfs_QmRw4UV4UukUydbKXshFD9UwghpWZeFYd4Nsrp4UChSSEh___IPFS-1.png 600w, https://www.adammalone.net/content/images/size/w1000/2022/01/_ipfs_QmRw4UV4UukUydbKXshFD9UwghpWZeFYd4Nsrp4UChSSEh___IPFS-1.png 1000w, https://www.adammalone.net/content/images/size/w1600/2022/01/_ipfs_QmRw4UV4UukUydbKXshFD9UwghpWZeFYd4Nsrp4UChSSEh___IPFS-1.png 1600w, https://www.adammalone.net/content/images/2022/01/_ipfs_QmRw4UV4UukUydbKXshFD9UwghpWZeFYd4Nsrp4UChSSEh___IPFS-1.png 1705w" sizes="(min-width: 720px) 720px"></div></div></div><figcaption>Testing that my files existed using the IPFS web UI.</figcaption></figure><blockquote>When using the IPFS explorer, sometimes pinned files aren't shown in the files tab. To show them, manually alter the URL from .../#/files to .../#/pins</blockquote><h3 id="dht">DHT</h3><p>I was curious about how a file on my node could be located and accessed by other people using only a content identifier (<a href="https://docs.ipfs.io/concepts/content-addressing/?ref=adammalone.net">CID</a>), and it turns out I'd used something similar previously.</p><p>Over in my blog titled <em><a href="https://www.adammalone.net/post/how-i-got-into-technology/">How I got into technology</a></em>, I discussed my use of the Direct Connect (DC) protocol. 
It turns out both DC and IPFS use <a href="https://docs.ipfs.io/concepts/dht/?ref=adammalone.net">Distributed Hash Tables</a> (DHT) as the mechanism of publishing to the world who has what content (and how to get there).</p><p>Turns out DHT is pretty cool.</p><p>Any content you make available on IPFS gets cryptographically hashed and assigned an associated content ID (CID). That means:</p><ul><li>Any difference in the content will produce a different CID, and</li><li>The same content added to two different IPFS nodes using the same settings will produce <em>the same CID</em>.</li></ul><p>In my case, the cryptographic hash for my file is <code>QmRw4UV4UukUydbKXshFD9UwghpWZeFYd4Nsrp4UChSSEh</code>.</p><p>As with DC, peers are connected together within the IPFS network. The lookup algorithm connects to the 10 closest peers and asks who their closest peers are to the CID we're looking for. It traverses the list of peers, eventually finding one that has the content.</p><p>This of course looks super cool when visualised. From my home node, I queried the hash of the file I've stored in my public IPFS node and used <a href="https://research.protocol.ai/blog/2021/a-visualization-tool-for-the-ipfs-dht/?ref=adammalone.net">Protocol Labs' code</a> to chart the queries. 
</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.adammalone.net/content/images/2022/01/image-2.png" class="kg-image" alt loading="lazy" width="1534" height="901" srcset="https://www.adammalone.net/content/images/size/w600/2022/01/image-2.png 600w, https://www.adammalone.net/content/images/size/w1000/2022/01/image-2.png 1000w, https://www.adammalone.net/content/images/2022/01/image-2.png 1534w" sizes="(min-width: 720px) 720px"><figcaption>A visualisation of how a CID is located using DHT</figcaption></figure><h3 id="a-few-challenges">A few challenges</h3><p>After running <a href="https://github.com/ipfs/go-ipfs?ref=adammalone.net">go-ipfs</a> on my node for a couple of weeks, I noticed that my server resources were being expended almost entirely by the daemon. I could have tried the <code>nice</code> sledgehammer approach, but a little more research indicated that setting daemon options via its <code>systemd</code> unit may also work. By using <code>--init-profile=server --routing=dhtclient</code> we instruct the IPFS daemon to disable local host discovery and use DHT in client-only mode.</p><p>I was tempted to change the init profile to <code>lowpower</code>, but the server resource issues cleared up with the above options.</p><h2 id="ens">ENS</h2><h3 id="registering-my-domain">Registering my domain</h3><p>Once I'd created the payload, it was easy enough to register on <a href="https://ens.domains/?ref=adammalone.net">ens.domains</a> and sign the transaction to take ownership of the DNS address I'd requested. 
The annoying majority of the price, however, was gas fees; another reason why <a href="https://ethereum.org/en/developers/docs/scaling/layer-2-rollups/?ref=adammalone.net">Layer 2 Rollups</a> (L2s), which I'm learning more about, are a way to reduce adoption blockers.</p><p>I opted to purchase my domain for 10 years at a cost of 0.002Ξ per year (plus gas).</p><h3 id="linking-domain-website">Linking domain &amp; website</h3><p>After ENS had detected my payment for the domain and the token was attached to my wallet, I executed a different function on the smart contract to change the content of the token to point to my CID. This can now be observed on <a href="https://etherscan.io/enslookup-search?search=adammalone.eth&ref=adammalone.net">etherscan</a> along with any other metadata I decide to include.</p><p>This now means I have of course joined all the other crypto sheep and changed my Twitter name to adammalone.eth.</p><h2 id="for-the-future">For the future</h2><p>As was discussed in my <a href="https://www.adammalone.net/post/running-ghost-on-tor/">Ghost on Tor blog</a>, this website is now available on the Tor network. In addition to Tor, it's now (sort of) available on IPFS.</p><p>While a user can navigate to <a href="https://adammalone.eth/?ref=adammalone.net">adammalone.eth</a> (or <a href="https://adammalone.eth.link/?ref=adammalone.net">adammalone.eth.link</a> if their browser isn't IPFS enabled), the actual IPFS part is merely a redirect to the .net domain.</p><p>For me to run the website <em>entirely</em> on IPFS, I'll need to start looking more into <a href="https://docs.ipfs.io/concepts/ipns/?ref=adammalone.net">IPNS</a>, as this will allow me to update the website and content without continually having to update the CID within my ENS token.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ A Year at Drift ]]></title>
        <description><![CDATA[ I know a few of the rough themes that chart my career story, and one of those is
following my passions, regardless of how those passions evolve. 


--------------------------------------------------------------------------------

I think it&#39;s pretty common to think of everyone&#39;s life as being a story of many
arcs. Just ]]></description>
        <link>https://www.adammalone.net/a-year-of-drift/</link>
        <guid isPermaLink="false">6027b71ced39df06c1ce882e</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 09 Jan 2022 10:20:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1509773896068-7fd415d91e2e?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;MnwxMTc3M3wwfDF8c2VhcmNofDN8fG5pZ2h0JTIwc2t5fGVufDB8fHx8MTY0MTcyMzcwOA&amp;ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>
<content:encoded><![CDATA[ <p>I know a few of the rough themes that chart my career story, and one of those is following my passions, regardless of how those passions evolve. </p><hr><p>I think it's pretty common to think of everyone's life as being a story of many arcs. Just like any good book, TV series, or movie, the plot involves one long and evolving character mega-arc. Within the mega-arc are arcs and sub-arcs.</p><p>Everyone has their own unique stories, with different combinations of arcs and sub-arcs throughout their life. Many people would define their school-life, university (if they go), each of their serious relationships, children (if any), home-ownership, retirement, etc. as individual arcs. Sub-arcs within these arcs would be smaller storylines, perhaps like a year studying abroad, or an individual project working with a specific team.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2021/02/life-arc.png" class="kg-image" alt srcset="/content/images/size/w600/2021/02/life-arc.png 600w, /content/images/size/w1000/2021/02/life-arc.png 1000w, /content/images/size/w1600/2021/02/life-arc.png 1600w, /content/images/size/w2400/2021/02/life-arc.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>An example of the life mega-arc, arc, and sub-arcs.</figcaption></figure><p>I like to think of both my life and career as stories, which means it's important for me to be able to craft a good narrative. A mentor previously advised me that the story itself was far more important than the reality and a little embellishment was requisite artistic license.</p><p>I've always liked the word <em>raconteur</em> for this reason, because the story and the telling of it are the <em>raison d'être</em> for this tenuous quiddity analogy.  </p><blockquote><em>noun,</em> <em>plural</em> <strong>rac·on·teurs</strong>  [rak-<em>uh</em>n-<strong>turz</strong>; <em>French</em> ra-kawn-<strong>tœr</strong>]. 
A person who is skilled in relating stories and anecdotes interestingly.</blockquote><p>I love hearing other people's (abridged) life stories, how they approached their junctures and decided on the paths they took, because ultimately it's a record of how they came to exist in that present moment.</p><p>Bastardising <a href="https://en.wikipedia.org/wiki/Ren%C3%A9_Descartes?ref=adammalone.net">Descartes</a>, and far less of an aphorism than the original: <em>ego eram ergo sum</em> or <em>I was therefore I am</em>.</p><h2 id="back-to-my-story">Back to my story</h2><p>The past year has been a continuation of the pandemic-derived <em>folie à plusieurs</em>. It also marks a year since taking another new path, which made me think it was worth a personal, if public, retrospective.</p><p>I'd originally written the majority of this blog last year, but ultimately kept it in draft because I wanted to look back rather than forward, given I'm more of an expert in my history than in divination.</p><p><a href="https://pubmed.ncbi.nlm.nih.gov/18156588/?ref=adammalone.net">Plus, it's proven I don't care about future me</a>.</p><p>With that in mind, I've added a few notes based on my first year at Drift as footnotes for when I finally draw up all my arcs.</p><h3 id="1-staying-closer-to-customers-was-right">1. Staying closer to customers was right</h3><p>I wasn't sure what the right role would be for me:</p><ul><li>A strong foundation in technology would have allowed for a role in engineering - less hands-on but instead enabling</li><li>A background in presales would allow a future in presales</li><li>Delivery is delivery is delivery</li></ul><p>Couple the above with the partnering and sales skills I'd developed in conjunction with previous roles, and there were a few options. 
Counterintuitively, however, this just led to <a href="https://en.wikipedia.org/wiki/Analysis_paralysis?ref=adammalone.net">decision paralysis</a> about the right pathway to proceed down.</p><p>Another mentor advised me to stay as close to customers as I possibly could. They are, after all, the reason for being, and why everyone in an organisation has a job.</p><p>This proved to be prescient.</p><p>I've really enjoyed taking the lead in the development of new customer relationships, learning about their organisations, and helping both my customers and the companies they work for become successful.</p><p>Staying closer to customer problems, decisions, and solutions is also having a multiplicative effect on my network. I'm meeting more people more often and gaining a lot more insight into how each of them is building successful revenue streams. This, in turn, helps me to craft better stories and provide better advice in each subsequent interaction.</p><h3 id="2-being-technology-adjacent-is-hard">2. Being technology adjacent is hard</h3><p>I've been on a journey to abstract myself from technology pigeon-holing for a while. This was the year that I finally cut the cord that had been growing tenuously thin and took myself out of my niche.</p><p>I <em>generally</em> have a rule to never go backwards, and this applies to companies I've worked for, roles I've had, and technologies I've used.</p><p>That all being said, it's always difficult to leave the <a href="/post/personal-resets/">comfort zone</a>. If I've previously done something, there's a non-zero chance I now know how to do it to a reasonable level of capability.</p><p>It's definitely been a challenge over the last year to <em>not</em> dive into the technology I'm working with as much. 
After all, that's not my job anymore.</p><p>One concession I'll make though is that while I need to rely on more technically savvy people to support my customers, having a base-level understanding of the tech is never a bad thing and imbues the relationship with a lot of trust.</p><h3 id="3-product-over-implementation">3. Product over implementation</h3><p>After working in each of the big three factions when it comes to technology implementation, I feel like I have a far better understanding now of the benefits and drawbacks of each.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2022/01/Untitled-2022-01-05-2343--2-.png" class="kg-image" alt="A venn diagram of three intersecting circles containing the words vendor, in-house, and implementor."><figcaption>The technology implementation triad</figcaption></figure><p>Working in-house is great if you want to get stuck into really long-term initiatives and see them through from inception until the end. As a result, your subject matter expertise is pretty high.</p><p>As an implementor, you can ride along for part of these initiatives and deliver high-quality outcomes as part of a project. Typically seeing 1-4 clients in a year and being exposed to different use cases gives a range of experience at a moderate depth.</p><p>Vendors support a larger number of customers annually, and hold shallower knowledge across an entire range of topics - crudely diagrammed below.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2022/01/Untitled-2022-01-05-2343--3-.png" class="kg-image" alt="A line graph showing depth vs breadth of knowledge. In-house have low breadth but high depth, implementors are in the middle, and vendors have low depth but high breadth." 
srcset="/content/images/size/w600/2022/01/Untitled-2022-01-05-2343--3-.png 600w, /content/images/size/w1000/2022/01/Untitled-2022-01-05-2343--3-.png 1000w, /content/images/2022/01/Untitled-2022-01-05-2343--3-.png 1262w" sizes="(min-width: 720px) 720px"><figcaption>Depth vs Breadth of knowledge</figcaption></figure><p>After having deep technical knowledge previously, I prefer the variety of knowing a little bit about a lot, and being a <a href="https://en.wikipedia.org/wiki/Jack_of_all_trades,_master_of_none?ref=adammalone.net">master of nothing</a>.</p><h3 id="4-i-can-t-wait-to-travel-again">4. I can't wait to travel again</h3><p>As I've referenced in point 1, working with customers is an absolute blast. Putting on my consultant tricorn and diving into problems (whiteboard optional but recommended) is where I feel like I flow best.</p><p>The transition to purely digital interaction for a <em>mostly</em> extroverted personality type has been a challenge. I find that the staccato, Zoom-pierced day is mirrored by equivalent frenzied moments of productivity, which in itself is more exhausting than a longer, busier day meeting customers in person.</p><p>The other thing that I'm lacking is the intangibles outside of scheduled conversations: corridor conversations, coffees, and casual catch-ups.</p><p>I cannot wait until I can both meet the customers I've worked with this year and start to introduce myself to new customers I'll work with over 2022. 
Despite my being a proponent of service digitisation, we do not yet have a truly comparable solution.</p><hr><p>In summary, the change has been good, the challenges fun to surmount, and I can't wait to see what 2022 will bring.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="/content/images/2022/01/IMG_0725-1.JPG" class="kg-image" alt="A Qantas A380 at Sydney Airport" srcset="/content/images/size/w600/2022/01/IMG_0725-1.JPG 600w, /content/images/size/w1000/2022/01/IMG_0725-1.JPG 1000w, /content/images/size/w1600/2022/01/IMG_0725-1.JPG 1600w, /content/images/size/w2400/2022/01/IMG_0725-1.JPG 2400w" sizes="(min-width: 1200px) 1200px"><figcaption>See you soon, I hope.</figcaption></figure> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Running Ghost on Tor ]]></title>
        <description><![CDATA[ Recently I&#39;ve had the opportunity to play with some new and existing
technologies as a mechanism of both upskilling and trying something new.

I decided to spend some of that time learning how to create a hidden service,
and make my own blog available over the Tor network. ]]></description>
        <link>https://www.adammalone.net/running-ghost-on-tor/</link>
        <guid isPermaLink="false">5ff519005ee594657840fbca</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Thu, 07 Jan 2021 10:30:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1570581118391-7a6f30b69b16?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;MXwxMTc3M3wwfDF8c2VhcmNofDd8fGdob3N0fGVufDB8fHw&amp;ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>
        <content:encoded><![CDATA[ <p>Recently I've had the opportunity to play with some new and existing technologies as a mechanism of both upskilling and trying something new.</p><p>I decided to spend some of that time learning how to create a hidden service, and make my own blog available over the Tor network. Although this task was mainly as a proof of concept so I could say I've done it, I did have a tiny desire to be lucky and brute force an awesome vanity Tor .onion address.</p><h3 id="creating-my-address">Creating my address</h3><p>Remembering that I had some free Google Cloud Platform (GCP) credits, I spun up some servers and cloned <a href="https://github.com/cathugger/mkp224o/?ref=adammalone.net">mkp224o, the vanity address generator for ed25519 onion services</a>, onto each of them. I opted for CPU optimised instances as this was my limitation using the tool.</p><p>Knowing that a 10-character prefix (adammalone) was potentially <a href="https://github.com/cathugger/mkp224o/issues/27?ref=adammalone.net#issuecomment-568291087">out of my computational reach</a>, I decided to temporarily calculate an easier hash with a shorter known prefix. This would allow me to move on to the next step in my PoC while the GCP servers continue churning away for as long as I still have free credits. </p><p>I ended up calculating a hash with the prefix <em>amalone</em> in an unbelievably lucky 30 seconds.</p><p>This blog post provides some further recommended reading for people more interested in how <code>.onion</code> hostnames can be generated.</p><h3 id="installing-tor">Installing tor</h3><p>Continuing the trend I've written about in my previous blog posts, I had to find a way to install and configure everything using Ansible. 
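</p><p>As an aside on the vanity search above, the luck involved is easy to estimate. A v3 onion address is base32-encoded, so each additional fixed prefix character multiplies the expected number of candidate keys by 32. A rough back-of-envelope sketch, ignoring mkp224o's batching and the address's version and checksum bytes:</p><figure class="kg-card kg-code-card"><pre><code class="language-python"># Expected number of candidate keys to test for an n-character
# base32 prefix of a v3 .onion address (uniform-hash assumption).
def expected_attempts(prefix_len: int) -> int:
    return 32 ** prefix_len

# "amalone" (7 chars): ~3.4e10 keys expected, so 30 seconds
# really was unbelievably lucky.
print(f"{expected_attempts(7):,}")   # 34,359,738,368
# "adammalone" (10 chars): 2^50 keys, out of computational reach.
print(f"{expected_attempts(10):,}")  # 1,125,899,906,842,624</code></pre><figcaption>Back-of-envelope estimate of vanity onion prefix difficulty.</figcaption></figure><p>At millions of keys per second, a seven-character prefix is on the order of hours of expected work, and each extra character is 32 times more; ten characters is tens of thousands of times harder than seven.</p><p>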
Ultimately I ended up using <a href="/p/3115a46c-6ff5-4dd1-bd46-3ec3f7cd37cd/haghighi_ahmad/tor">haghighi_ahmad.tor</a> with some pretty minor configuration in <code>vars/main.yml</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">tor_proxy: false
tor_http_port: "80"
tor_nickname: "torphonius"
tor_run_as_daemon: true
tor_user: "debian-tor"
tor_group: "debian-tor"
tor_hidden_service_dir: "/var/lib/tor/hidden_service"
tor_hidden_services: 
  - name: http
    version: 3
    port: 80
    host: "127.0.0.1:88"</code></pre><figcaption>Configuration variables for Ansible Tor role.</figcaption></figure><p>This configures tor to listen on port 80 and to forward requests through to port 88 where Nginx is listening for it. Whilst there is an existing Nginx server block using port 80, I wanted to segregate Tor further.</p><blockquote>N.B. Tor can listen on port 80 at the same time as Nginx. The reason for this is that Tor isn't binding to an external interface as Nginx does.</blockquote><p>The above configuration provides me with an <code>/etc/tor/torrc</code> that looks like the below:</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">RunAsDaemon 1
SOCKSPort 0

# Hidden services
HiddenServiceDir /var/lib/tor/hidden_service/http
HiddenServiceVersion 3
HiddenServicePort 80 127.0.0.1:88</code></pre><figcaption>My server's /etc/tor/torrc.</figcaption></figure><h3 id="configuring-ghost-to-listen-to-an-onion">Configuring Ghost to listen to an onion</h3><p>One of the limitations of Ghost is its <a href="https://github.com/TryGhost/Ghost/issues/11595?ref=adammalone.net">inability to respond to multiple different domains</a>. The domain that each Ghost blog uses to serve pages is hardcoded in <code>config.production.yml</code> and any attempt to access the site with a different URL leads to either redirects or errors.</p><p>As Tor uses a different way entirely of representing a hostname, this blog would need to be accessible using two entirely separate combinations of words in the browser's URL bar. I tried initially to use some <a href="https://nginx.org/en/docs/http/ngx_http_proxy_module.html?ref=adammalone.net#proxy_set_header">clever Nginx configuration</a> to point requests coming in on the <code>.onion</code> domain to the clearnet domain by rewriting on the way in and out.</p><p>That unfortunately proved fruitless since I assume Ghost uses the URL configured in <code>config.production.yml</code> to create links and routes rather than the <code>Host</code> header associated with incoming requests. This approach is good from a security perspective, but the limit of a single domain makes this sort of implementation challenging.</p><p>I eventually settled on creating a shadow install of Ghost for the <code>.onion</code> domain that would mirror the clearnet domain. I achieved this by creating a new directory for the Tor install and symlinking <code>content</code>, <code>current</code>, <code>system</code>, and <code>versions</code> directories to the clearnet install. I copied across <code>config.production.yml</code> and changed <strong>only</strong> the <code>url</code> and <code>server: port</code> values.</p><blockquote>N.B. 
MySQL details should remain the same as we're reading from the same clearnet database regardless of which install the user accesses.</blockquote><p>I could potentially use the <code>.onion</code> hostname as my production URL and then construct another <a href="https://workers.cloudflare.com/?ref=adammalone.net">Cloudflare Worker</a> to <a href="https://developers.cloudflare.com/workers/examples/rewrite-links?ref=adammalone.net">rewrite links</a> and <a href="https://developers.cloudflare.com/workers/examples/alter-headers?ref=adammalone.net">alter request/response</a> <code>Host</code> headers for anyone coming in on the clearnet domain, but that seemed like too much work.</p><h3 id="fitting-it-all-together">Fitting it all together</h3><p>After learning about the fantastic drawing tool <a href="https://excalidraw.com/?ref=adammalone.net">Excalidraw</a>, I felt it only appropriate to draw a pretty picture to show how users may reach the server over either HTTPS or Tor.</p><p>What can be seen in the diagram below is that users browsing over standard HTTP/S will be converted to HTTPS with Cloudflare before going through Nginx to my Ghost public instance.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2021/01/tor.png" class="kg-image" alt srcset="/content/images/size/w600/2021/01/tor.png 600w, /content/images/size/w1000/2021/01/tor.png 1000w, /content/images/2021/01/tor.png 1249w" sizes="(min-width: 720px) 720px"><figcaption>Network diagram to show mechanisms users may interact with this blog.</figcaption></figure><p>Users browsing with Tor pop out inside the server, bypassing the firewall and hit Nginx on port 88. These requests are then routed to the tor instance of Ghost due to the limitation discussed above.</p><figure class="kg-card kg-code-card"><pre><code class="language-nginx">server {
    listen 443 ssl http2;
    server_name www.adammalone.net;
    index index.html index.htm;

    ssl_certificate     /etc/ansible/keys/www.adammalone.net-ec.pem;
    ssl_certificate_key /etc/ansible/keys/www.adammalone.net-ec.key;

    include /etc/nginx/cloudflare-allow.conf;
    deny all;
    location / {
            proxy_pass http://localhost:2112/;
    }
}</code></pre><figcaption>Nginx configuration produced by Ansible for the clearnet blog.</figcaption></figure><p>As discussed above, both instances of Ghost hit the same MySQL database. The below configuration shows how requests to the administration pages are blocked, which I think makes the concept of <em>two ghosts, one db</em> a safer one.</p><figure class="kg-card kg-code-card"><pre><code class="language-nginx">server {
    listen 127.0.0.1:88;
    root /var/www/html/tor;
    index index.html index.htm;

    location /ghost { deny all; }
    location / {
            proxy_pass http://localhost:2113/;
    }
}</code></pre><figcaption>Nginx configuration produced by Ansible for the onion blog.</figcaption></figure><blockquote>N.B. Security headers set in Nginx have been removed from these snippets for brevity.</blockquote><h3 id="why-no-ssl-certificate">Why no SSL certificate?</h3><p>Finally, you may have noticed that I've not utilised an SSL certificate for users accessing the site over Tor. After a good deal of research, my opinion has coagulated to the view that over Tor, SSL certificates provide positive identity but no additional security.</p><p>To summarise <a href="https://stackoverflow.com/a/27759746/14957095?ref=adammalone.net">this very good Stack Overflow comment</a>:</p><ul><li>As Tor is already an encrypted protocol, an SSL certificate adds no additional security </li><li>Anyone can generate an <code>.onion</code> hostname although it's cryptographically all but impossible for someone to generate <em>your</em> <code>.onion</code> hostname</li><li>An SSL certificate with EV extension can prove the real identity of the owner of the authenticated hosts</li></ul><p>Because I'm not looking to positively identify myself as the owner of this blog any further than I already have, I'm happy to not go through the additional effort, time, and <a href="https://community.letsencrypt.org/t/letsencrypt-for-onion/10045?ref=adammalone.net">cost</a> of using an unnecessary SSL certificate.</p><h3 id="find-me">Find me</h3><p>Until I strike cryptographic gold with a nice 10-character prefix, you can find me on Tor here: <a href="http://amalone2l6sqxt75shmkrbglepe5uawm4gr5gjk4w7h4l3qsao7iwcqd.onion/?ref=adammalone.net">http://amalone2l6sqxt75shmkrbglepe5uawm4gr5gjk4w7h4l3qsao7iwcqd.onion</a>.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Integrating Nginx and Keycloak without OpenResty ]]></title>
        <description><![CDATA[ Extending on my previous post about creating a custom CA and using client
certificates through Cloudflare
[/post/client-certificates-custom-cas-and-cloudflare/], I wanted to write about
how I integrated Keycloak with Nginx without OpenResty.

As we had a handful of different websites and applications running on the
server, I wanted to simplify everything ]]></description>
        <link>https://www.adammalone.net/integrating-nginx-and-keycloak-without-openresty/</link>
        <guid isPermaLink="false">5fe7b647395f2b6d0841b1ae</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Mon, 28 Dec 2020 08:00:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1485550409059-9afb054cada4?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;MXwxMTc3M3wwfDF8c2VhcmNofDJ8fGlkZW50aXR5fGVufDB8fHw&amp;ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>
<content:encoded><![CDATA[ <p>Extending on my previous post about <a href="/post/client-certificates-custom-cas-and-cloudflare/">creating a custom CA and using client certificates through Cloudflare</a>, I wanted to write about how I integrated Keycloak with Nginx without OpenResty.</p><p>As we had a handful of different websites and applications running on the server, I wanted to simplify everything with the use of an identity platform. This would then be <strong>the</strong> place that usernames and passwords are stored, thus governing authentication to any existing or new property.</p><p>Installing Keycloak itself with a <a href="https://github.com/andrelohmann/ansible-role-keycloak?ref=adammalone.net">supplied Ansible role</a> didn't quite go to plan, mostly due to some minor differences in Ubuntu 20 and Keycloak 12. Eventually I worked my way around the error messages that Ansible kept throwing and created <a href="https://github.com/andrelohmann/ansible-role-keycloak/pull/2?ref=adammalone.net">a pull request</a> so we could share the love back.</p><p>Despite the fact that every single blog post and technical article claimed the <em>only</em> way to complete a Keycloak/Nginx integration was to use <a href="https://openresty.org/en/?ref=adammalone.net">OpenResty</a> (as it combines Nginx with LuaJIT), I didn't want to because I was extremely happy with the <a href="https://github.com/geerlingguy/ansible-role-nginx?ref=adammalone.net">Ansible role I use to manage Nginx</a> and a comparable role wasn't available for OpenResty. Using this role was also the reason I didn't want to have to install Nginx from source.</p><p>As a result, I needed to find a way to install Lua and other dependencies so I could use <code>access_by_lua</code> in my Nginx configuration. 
Doing this by hand in the first instance revealed that:</p><ul><li>The versions of Lua and LuaRocks in default Ubuntu apt repos were not recent enough</li><li><a href="https://github.com/openresty/lua-resty-string/issues/66?ref=adammalone.net">Lua-resty-string did not have a recent enough version in the LuaRocks repos</a> which led to <code>undefined symbol: EVP_CIPHER_CTX_init</code> errors</li></ul><p>The correct combination of each for a successful implementation was therefore:</p><ul><li>Lua 5.4.2</li><li>LuaRocks 3.4.0</li><li>libnginx-mod-http-lua</li><li>lua-cjson</li><li>lua-resty-http</li><li>lua-resty-session</li><li>lua-resty-jwt</li><li>lua-resty-openidc</li><li>lua-resty-string</li></ul><p>All of the Lua modules except for lua-resty-string can be installed directly with LuaRocks. Because of the error mentioned above, I installed lua-resty-string from source. Converting this to Ansible and using roles to install <a href="https://github.com/andrewrothstein/ansible-lua?ref=adammalone.net">Lua</a> and <a href="https://github.com/andrewrothstein/ansible-luarocks?ref=adammalone.net">LuaRocks</a> gave me the following example playbook:</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">- hosts: webservers
  vars_files:
    - vars/main.yml
  roles:
    - { role: andrewrothstein.lua }
    - { role: andrewrothstein.luarocks }
    
  tasks:
    - name: install libnginx-mod-http-lua
      package:
        name: libnginx-mod-http-lua
        state: present

    - name: luarocks install lua-resty-http
      become: yes
      become_user: root
      command: luarocks install lua-resty-http
      args:
        creates: /usr/local/share/lua/5.1/resty/http.lua

    - name: luarocks install lua-resty-session
      become: yes
      become_user: root
      command: luarocks install lua-resty-session
      args:
        creates: /usr/local/share/lua/5.1/resty/session.lua

    - name: luarocks install lua-resty-jwt
      become: yes
      become_user: root
      command: luarocks install lua-resty-jwt
      args:
        creates: /usr/local/share/lua/5.1/resty/jwt.lua

    - name: luarocks install lua-resty-openidc
      become: yes
      become_user: root
      command: luarocks install lua-resty-openidc
      args:
        creates: /usr/local/share/lua/5.1/resty/openidc.lua

    - name: luarocks install lua-cjson
      become: yes
      become_user: root
      command: luarocks install lua-cjson
      args:
        creates: /usr/local/share/lua/5.1/cjson

    - name: look for resty-string
      become: yes
      stat:
        path: /usr/local/share/lua/5.1/resty/string.lua
      changed_when: False
      register: restystring
    - when: not restystring.stat.exists
      block:
        - name: download tgz...
          become: yes
          become_user: root
          get_url:
            url: https://github.com/openresty/lua-resty-string/archive/v0.12.tar.gz
            dest: /tmp/lua-resty-string-0.12
            checksum: sha256:bfd8c4b6c90aa9dcbe047ac798593a41a3f21edcb71904d50d8ac0e8c77d1132
        - name: unarchiving tgz
          become: yes
          become_user: root
          unarchive:
            remote_src: yes
            src: /tmp/lua-resty-string-0.12
            dest: /usr/local/src
        - name: copy lua-resty-string files into place
          become: yes
          become_user: root
          copy:
            src: "{{ item }}"
            dest: /usr/local/share/lua/5.1/resty/
            owner: root
            group: root
            mode: '0644'
          with_fileglob:
            - /usr/local/src/lua-resty-string-0.12/lib/resty/*
      always:
        - name: cleaning up...
          become: yes
          become_user: root
          with_items:
            - /tmp/lua-resty-string-0.12
            - /usr/local/src/lua-resty-string-0.12
          file:
            path: '{{ item }}'
            state: absent</code></pre><figcaption>Example playbook to integrate Nginx with Keycloak.</figcaption></figure><p>The final step from here was to extend the Nginx configuration from my <a href="/post/client-certificates-custom-cas-and-cloudflare/">previous blog post</a> to use <code>access_by_lua</code>. The <code>set $session_secret</code> line was <a href="https://github.com/bungle/lua-resty-session/issues/23?ref=adammalone.net#issuecomment-171419442">crucially important</a> as without that I kept running into the following error:</p><figure class="kg-card kg-code-card"><pre><code>SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking</code></pre><figcaption>SSL error when not setting a global session_secret.</figcaption></figure><p>The final working Nginx configuration looked like the below (with replacement values for session_secret, client_id and client_secret of course).</p><figure class="kg-card kg-code-card"><pre><code class="language-nginx">server {
    listen 443 ssl http2;
    server_name www.oursite.com;
    index index.html index.htm;
    
    include /etc/nginx/cloudflare-allow.conf;
    deny all;
    ssl_certificate     /etc/letsencrypt/live/oursite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/oursite.com/privkey.pem;
    ssl_verify_client on;
    ssl_client_certificate /etc/ssl/ca/certs/OurRoot_CA.crt;

    set $session_secret gbbPlrI5jJrWTAHuZpQxAg961dwNSUFB;
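    # lua-resty-session derives its session keys from this value; a random
    # 32-character string (e.g. from `openssl rand -hex 16`) is the usual
    # shape -- never reuse a published example value.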
    access_by_lua '
      local opts = {
        redirect_uri = "https://www.oursite.com/redirect_uri",
        accept_none_alg = true,
        discovery = "https://keycloak.oursite.com/auth/realms/master/.well-known/openid-configuration",
        client_id = "nginx-oursite",
        client_secret = "client-secret-goes-here",
        ssl_verify = "no",
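        -- NOTE: ssl_verify = "no" above makes lua-resty-openidc skip TLS
        -- verification when fetching the discovery document and tokens;
        -- convenient for testing, but enable verification in production.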
        redirect_uri_scheme = "https",
        logout_path = "/logout",
        redirect_after_logout_uri = "https://keycloak.oursite.com/auth/realms/master/protocol/openid-connect/logout",
        redirect_after_logout_with_id_token_hint = false,
        session_contents = {id_token=true}
      }
      local res, err = require("resty.openidc").authenticate(opts)

      if err then
        ngx.status = 403
        ngx.say(err)
        ngx.exit(ngx.HTTP_FORBIDDEN)
      end
    ';
    location / {
        proxy_pass http://localhost:3434/;
    }

}</code></pre><figcaption>Nginx configuration using Lua to authenticate sessions with Keycloak.</figcaption></figure><p>Hopefully this helps anyone else wanting to use Ansible to install Nginx and Keycloak!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Client certificates, Let&#x27;s Encrypt, custom CAs and Cloudflare ]]></title>
        <description><![CDATA[ Over the last week, I&#39;ve been building a new server for some friends and me to
host our own NextCloud [https://nextcloud.com/] instance. Part of this is to
keep our technical eyes up-to-date and relevant, with the other being to reduce
some of our reliance on Google ]]></description>
        <link>https://www.adammalone.net/client-certificates-custom-cas-and-cloudflare/</link>
        <guid isPermaLink="false">5fd730d2b63cb658934b486a</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sat, 26 Dec 2020 11:30:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1548200482-b77f76c9dbef?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;MXwxMTc3M3wwfDF8c2VhcmNofDl8fGZpcmV3YWxsfGVufDB8fHw&amp;ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>
        <content:encoded><![CDATA[ <p>Over the last week, I've been building a new server for some friends and me to host our own <a href="https://nextcloud.com/?ref=adammalone.net">NextCloud</a> instance. Part of this is to keep our technical eyes up-to-date and relevant, with the other being to reduce some of our reliance on Google and to own our own data<sup>[<a href="#footnote-1">1</a>]</sup>.</p><p>After using Ansible to set up the server and configure its security and firewalls, I created our Let's Encrypt SSL certificates (<a href="https://github.com/geerlingguy/ansible-role-certbot/issues/12?ref=adammalone.net#issuecomment-340807339">as this step is more easily completed outside Ansible</a>). I then decided to go a step further and create client certificates which would prohibit anyone but the holders of those certificates from accessing the NextCloud instance. This meant that even if users had poor passwords or if there was a zero-day on NextCloud, we had a further line of defence to prevent access.</p><p>Every single forum post I found online said that we couldn't create client certificates with Let's Encrypt as we don't have the root certificate, meaning no signing ability. The only solution that seemed viable from here was to create our own Certificate Authority (CA) and combine origin SSL termination from Let's Encrypt with certificates generated from our own CA.</p><p>I knew the basics from setting up a CA on my home server; however, this time would be a little different since we were using Certbot to provision our certificates and Cloudflare to manage our DNS/protect our edge.</p><p>Taking a huge number of examples from other Ansible code, and a few trips into StackOverflow, I put together <a href="https://github.com/typhonius/ansible-role-ca?ref=adammalone.net">an Ansible role</a> that would manage our server CA and create client certificates for us. 
Our server <code>vars/main.yml</code> was then extended to customise variables from this role to create client certificates:</p><figure class="kg-card kg-code-card"><pre><code>ca_passphrase: somethingsecrethere
ca_country_name: AU
ca_organization_name: Oursite
ca_organizational_unit_name: ca.oursite.com
ca_state_or_province_name: NSW
ca_email_address: ca@oursite.com
ca_common_name: Oursite Root CA
ca_requests:
  - name: adam
    email_address: adam@oursite.com
    common_name: oursite.com
    subject_alt_name:
      - DNS:www.oursite.com
      - DNS:wiki.oursite.com
      - DNS:cloud.oursite.com
    country_name: AU
    organization_name: Oursite
    organizational_unit_name: Client Certificate
    passphrase: somethingsecrethere
    cipher: aes256
    
ca_privatekey_path: "{{ ca_private_path }}/OurSite_Root_CA.pem"
ca_csr_path: ca/OurSite_Root_CA.csr
ca_certificate_path: "{{ ca_certs_path }}/OurSite_Root_CA.crt"
ca_root_name: "Oursite_Root"</code></pre><figcaption>Ansible vars/main.yml for creating the CA and single cert/key combination.</figcaption></figure><p>After running Ansible, we had a number of client certificates that I distributed to everyone who used the server. A certificate was created for each user so everyone had their own separate access keys. During the previous step, I also configured Ansible to alter the Nginx configuration and make it require client certificates. The resulting Nginx configuration thus included the following lines:</p><figure class="kg-card kg-code-card"><pre><code class="language-nginx">server {
    listen 443 ssl http2;
    server_name oursite.com;
    index index.html index.htm;

    ssl_certificate     /etc/letsencrypt/live/oursite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/oursite.com/privkey.pem;
    ssl_verify_client on;
    ssl_client_certificate /etc/ssl/ca/certs/OurRoot_CA.crt;
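    # With ssl_verify_client set to "on", nginx answers any request lacking
    # a valid client certificate with a 400 Bad Request; setting it to
    # "optional" instead exposes the result via $ssl_client_verify so the
    # application can decide what to do.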
    location / {
        proxy_pass http://localhost:3434/;
    }
}</code></pre><figcaption>Nginx configuration for oursite.com.conf</figcaption></figure><p>Before anyone says anything, I know it's better to use an intermediate certificate rather than the root, but we wanted something quick to start with.</p><p>Unfortunately, regardless of any of the above, we received the following error.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/12/Screen-Shot-2020-12-26-at-7.09.01-PM.png" class="kg-image" alt srcset="/content/images/size/w600/2020/12/Screen-Shot-2020-12-26-at-7.09.01-PM.png 600w, /content/images/2020/12/Screen-Shot-2020-12-26-at-7.09.01-PM.png 951w" sizes="(min-width: 720px) 720px"><figcaption>400 Bad Request error.</figcaption></figure><p>This was definitely an error I'd seen before when I set up another CA for myself and spent hours banging my head against the wall to get the right combination of OpenSSL commands and certificates created. I initially assumed that I'd done something wrong when formulating the Ansible role; however, the answer turned out to be simpler than that.</p><p>Because of our setup, any internet traffic that reaches our server origin has to pass through Cloudflare, which acts as both a CDN and WAF for us. My hypothesis was that something was happening on the wire between their edge and our origin that meant client certificates weren't getting transmitted with the request.</p><p>Bypassing Cloudflare by hard-coding our server IP in <code>/etc/hosts</code> confirmed this, so it looked like we'd have to can the whole idea of client certificates and instead restrict our access to when we were on our <a href="https://www.wireguard.com/?ref=adammalone.net">Wireguard</a> VPN.</p><p>A brief look through the Cloudflare options, however, gave me a bit of hope, as there was a reference to client certificates. 
I was presented with two options when I clicked 'Create client certificate':</p><ul><li>Generate private key and CSR with Cloudflare</li><li>Use my private key and CSR</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/12/cloudflare_client_certificates-1.png" class="kg-image" alt srcset="/content/images/size/w600/2020/12/cloudflare_client_certificates-1.png 600w, /content/images/size/w1000/2020/12/cloudflare_client_certificates-1.png 1000w, /content/images/2020/12/cloudflare_client_certificates-1.png 1054w" sizes="(min-width: 720px) 720px"><figcaption>Cloudflare client certificates.</figcaption></figure><p>Seeing as we'd gone through the effort of creating our own CA, I decided that we'd allow Cloudflare to take our CSRs (helpfully already on the server from our Ansible role), and create and sign some certificates so we could take advantage of restricting based on valid certificate at the edge.</p><p>Once I had the signed certificates from Cloudflare, I needed to go back to our server and create some combined PKCS #12 files that could be loaded into each of our browsers in order to authenticate with the Cloudflare edge. The following command uses the signed certificate from Cloudflare and the private key on the server.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">openssl pkcs12 -export -out adam.p12 -in adam.cert.pem -inkey adam.key.pem</code></pre><figcaption>Openssl command to create a PKCS #12 file from a certificate and key.</figcaption></figure><p>The benefit of this method is that while Cloudflare is able to authenticate our client certificates, they only hold what is publicly available so our private keys are never leaked to a third party. 
This means that even if someone breaks into my Cloudflare account, they would not be able to make or retrieve valid client certificates.</p><p>The next step to this approach is to add a firewall rule that ensures all requests coming in to specific hostnames have a valid certificate. The rule below blocks any request traversing Cloudflare that is not accompanied by a valid client certificate.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/12/cloudflare_firewall.png" class="kg-image" alt srcset="/content/images/size/w600/2020/12/cloudflare_firewall.png 600w, /content/images/size/w1000/2020/12/cloudflare_firewall.png 1000w, /content/images/2020/12/cloudflare_firewall.png 1031w" sizes="(min-width: 720px) 720px"><figcaption>Cloudflare firewall rules.</figcaption></figure><blockquote>Ah but what about people who are sneaky and use your /etc/hosts method</blockquote><p>The final step to this approach is to deny access to Nginx from anyone outside Cloudflare's IP ranges. As is well documented elsewhere on the internet, I took the <a href="https://www.cloudflare.com/ips/?ref=adammalone.net">Cloudflare IP list</a> and configured Ansible to add the following to my Nginx configuration.</p><figure class="kg-card kg-code-card"><pre><code>server {
    listen 443 ssl http2;
    server_name oursite.com;
    index index.html index.htm;

    include /etc/nginx/cloudflare-allow.conf;
    deny all;
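    # allow/deny rules are evaluated in order: requests from the Cloudflare
    # ranges in cloudflare-allow.conf are admitted, and anything else falls
    # through to "deny all" and receives a 403.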
    ssl_certificate     /etc/letsencrypt/live/oursite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/oursite.com/privkey.pem;
    ssl_verify_client on;
    ssl_client_certificate /etc/ssl/ca/certs/OurRoot_CA.crt;
    location / {
        proxy_pass http://localhost:3434/;
    }
}</code></pre><figcaption>Nginx configuration for oursite.com.conf</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-nginx"># https://www.cloudflare.com/ips
# IPv4
allow 173.245.48.0/20;
allow 103.21.244.0/22;
allow 103.22.200.0/22;
allow 103.31.4.0/22;
allow 141.101.64.0/18;
allow 108.162.192.0/18;
allow 190.93.240.0/20;
allow 188.114.96.0/20;
allow 197.234.240.0/22;
allow 198.41.128.0/17;
allow 162.158.0.0/15;
allow 104.16.0.0/12;
allow 172.64.0.0/13;
allow 131.0.72.0/22;

# IPv6
allow 2400:cb00::/32;
allow 2606:4700::/32;
allow 2803:f800::/32;
allow 2405:b500::/32;
allow 2405:8100::/32;
allow 2a06:98c0::/29;
allow 2c0f:f248::/32;</code></pre><figcaption>/etc/nginx/cloudflare-allow.conf</figcaption></figure><p>This then allows us to get the best of all worlds:</p><ul><li>SSL certificates managed with Certbot and Let's Encrypt</li><li>Our own CA to create client certificates without third party involvement</li><li>Protection at the edge with client certificates</li><li>Locking Nginx requests and responses to those coming from the edge so people can't bypass it</li></ul><p>I'll have another blog post up in the coming weeks about how I then integrated NextCloud and Wiki.js, as well as a number of other custom services, with Keycloak acting as an identity provider (IdP) to reduce the number of usernames and passwords in use.</p><h3 id="footnotes">Footnotes</h3><!--kg-card-begin: markdown--><p>
    <a name="footnote-1">1</a>: We're not stupid enough to try to host our own mail.
</p><!--kg-card-end: markdown--> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ How I got into technology ]]></title>
        <description><![CDATA[ Whether I&#39;m making new friends and industry relationships, or even just
participating in casual conversation, I often get asked how someone with my
educational background ended up in the technology sector.

The simple answer is equal parts passion and procrastination with a sprinkling
of subversion and a pinch ]]></description>
        <link>https://www.adammalone.net/how-i-got-into-technology/</link>
        <guid isPermaLink="false">5f4e33ab60dc9217ebe7bbd2</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Wed, 28 Oct 2020 06:12:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1531297484001-80022131f5a1?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>Whether I'm making new friends and industry relationships, or even just participating in casual conversation, I often get asked how someone with my educational background ended up in the technology sector.</p><p>The simple answer is equal parts passion and procrastination with a sprinkling of subversion and a pinch of counter-culture.</p><p>The longer answer is that it all began when I got my first computer.</p><h3 id="the-background">The background</h3><p>Growing up while the internet was becoming prevalent was an interesting time. The graph below shows that the '90s and '00s were when internet use took off in a huge way, and my experience was no different. Learning to type on MSN Messenger, picking my online pseudonym, and being taught (incorrectly) how a search engine worked were all things I distinctly remember doing.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/09/Share-of-internet-users.png" class="kg-image" alt srcset="/content/images/size/w600/2020/09/Share-of-internet-users.png 600w, /content/images/size/w1000/2020/09/Share-of-internet-users.png 1000w, /content/images/size/w1600/2020/09/Share-of-internet-users.png 1600w, /content/images/size/w2400/2020/09/Share-of-internet-users.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Share of the population on the internet (Max Roser, Hannah Ritchie and Esteban Ortiz-Ospina (2015) - "Internet". <em><em>Published online at OurWorldInData.org.</em></em> Retrieved from: 'https://ourworldindata.org/internet' [Online Resource])</figcaption></figure><p>Never before or again will a generation go through formative years at the same time as both the world was becoming more connected and information was becoming more available to everyone. While a lot different now, I think the internet was much more like the Wild West as it was burgeoning. 
Security wasn't really much of a thing, many small sites and blogs existed in place of larger conglomerate presence, and it felt new and exciting to be a part of something that hadn't yet fully gained widespread adoption.</p><p>That all being said, I never saw much allure in studying I.T. at school, preferring rather to experiment on my own at home on my laptop. This meant idling in IRC, trying out code snippets or programs I found online without much care for the consequences, and trying to work out how to download music for free. A memory I have from the time is that I couldn't for the life of me get torrent files to download. Not knowing that a torrent client was needed, I downloaded 1kb files and furrowed my brow when they didn't play in VLC or iTunes.</p><p>I think my interest in technology was further enhanced by spending infrequent weekends with my Uncle, when he was back from university. We built computers, created LANs to play video games, and constructed elaborate train tracks around his parent's living room (more time was spent using the <a href="https://www.australianmodeller.com.au/products/track-eraser?ref=adammalone.net">track eraser</a> to get a fully conductive system than actually driving the trains). I learned how to crack games, what Warez meant, and of course finally learned how torrent files worked.</p><h3 id="the-set-up">The set-up</h3><p>Fast forward to university and I was happily studying chemistry, a passion I'd picked up from school and increased by my own extra-curricular home experiments. 
In my third year, I studied abroad, and thus began the chain of events that switched my passions from science to technology.</p><p>Living on campus in residential halls at university, everyone tussled with access to the internet and to media by the metered connection (100MB a day) hard-wired into all rooms, and the exorbitant price of data on cell-phone plans.</p><p>What we did have however was gigabit ethernet between all endpoints on the campus network backed by a connection to <a href="https://www.aarnet.edu.au/?ref=adammalone.net">AARNET</a>.</p><p>Being resourceful university students, someone had set up a <a href="https://en.wikipedia.org/wiki/Direct_Connect_(protocol)?ref=adammalone.net">Direct Connect</a> server and students could access their share of Linux ISOs locally whilst socialising with other students over the intranet. I eventually found my way onto this server and with that started to become a part of the online university community.</p><p>Fast-forward a couple of months and I realised I'd found a friend-group in the most unlikely place. While my parents and teachers had advised me against making friends with people online, I'd instead found a group of kindred spirits – people without lots of deep technical knowledge, but with nous and determination.</p><p>I started to hang out (in-person this time) with the motley group who administrated and moderated this hub, eventually being invited to the moderator group myself and donning the stole of responsibility that came with it (mainly being heavy handed with the kick button).</p><p>I learned that what I had assumed was a deeply technical set of infrastructure, software, and glue code was actually just <a href="https://github.com/blha303/YnHub?ref=adammalone.net">YnHub</a>.exe running on an old Windows laptop in someone's dorm cupboard. 
As nothing was encrypted and all traffic was likely logged extensively, the laptop was moved inconspicuously between dorms when someone in the university IT department (we'll hear more about these later) blocked an IP.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/09/CKq9gg9UcAAGccR.png" class="kg-image" alt srcset="/content/images/size/w600/2020/09/CKq9gg9UcAAGccR.png 600w, /content/images/2020/09/CKq9gg9UcAAGccR.png 813w" sizes="(min-width: 720px) 720px"><figcaption>YnHub on Windows.</figcaption></figure><h3 id="the-challenge">The Challenge</h3><p>Eventually, we either ran out of trusted dorm rooms to host the server, or a port was blocked that allowed DC++ to run seamlessly, and all was thought lost. Everyone was sent back to their 100MB data caps and with it came a dearth of sharing large files on campus.</p><p>With that came the challenge so neatly and succinctly summarised by the following tweet. Out of the window went university work and through that same window came a lot of learning about technology.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet" data-width="550"><p lang="en" dir="ltr">&quot;The best programs are the ones written when the programmer is supposed to be working on something else.&quot; - Melinda Varian</p>&mdash; Programming Wisdom (@CodeWisdom) <a href="https://twitter.com/CodeWisdom/status/1309470447667421189?ref_src=twsrc%5Etfw&ref=adammalone.net">September 25, 2020</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>We eventually came to the conclusion that the next evolution of DC++ would need to be off campus; in a place outside the reach of university IT, yet accessible to those on campus wanting to connect. One of the moderator team provided us with the root password to one of their friend's CentOS servers, and after extensive googling about how to get <em>in</em> to the server, we managed to open up an SSH connection. Further extensive googling provided us with a test command that we executed before shutting the window lest we break anything.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/09/admalone_-_root_theseus____-_ssh_theseus_-_80-24.png" class="kg-image" alt srcset="/content/images/size/w600/2020/09/admalone_-_root_theseus____-_ssh_theseus_-_80-24.png 600w, /content/images/size/w1000/2020/09/admalone_-_root_theseus____-_ssh_theseus_-_80-24.png 1000w, /content/images/2020/09/admalone_-_root_theseus____-_ssh_theseus_-_80-24.png 1140w" sizes="(min-width: 720px) 720px"><figcaption>The first command I ever ran on Linux.</figcaption></figure><p>Without the comfort zone of a GUI and with limited/no Linux knowledge, we realised that this would be a significant challenge for us. Over the next few days, we followed a set of instructions to install <a href="http://opendchub.sourceforge.net/?ref=adammalone.net">OpenDCHub</a> on the server and another set of instructions to open the right port in the firewall. We learned that DNS was the phone book of the internet and registered a new domain so users wouldn't have to remember our IP address and started spreading the word that we were back in business.</p><p>We'd correctly, albeit accidentally, surmised that even though DC++ used a centralised architecture, the point-to-point connections were <em>direct</em>. 
This meant no round trips from client to server for data transfer, and users were still able to benefit from the ultra-fast campus intranet.</p><h3 id="education-by-procrastination">Education by procrastination</h3><p>Rather than be satisfied with the default server settings, we started a quest to be secure, performant, and user-friendly, by spending many late nights learning all there was to know about Linux so we could effectively administer this server.</p><p>We also became aware that there was functionality baked into OpenDCHub to run bots written in Perl. We had <a href="https://github.com/typhonius/opendchub/tree/master/odchsrc/Samplescripts?ref=adammalone.net">two example bots</a> that came with the source code, some <a href="https://github.com/typhonius/opendchub/blob/master/odchsrc/Documentation/scriptdoc?ref=adammalone.net">documentation</a>, and a smattering of examples from around the web – although nothing that we could use directly. This of course meant that our next challenge was to learn Perl and write our own. You'd better believe we sent the <a href="https://xkcd.com/208/?ref=adammalone.net">relevant XKCD</a> around when things were working, as well as the other <a href="https://xkcd.com/1171/?ref=adammalone.net">relevant XKCD</a> when things weren't.</p><p>Running through three iterations, our Perl took the form of ChaosBot. The bot managed users, alert messages, chat statistics, chat history, and a host of other functions; including the ability to self-update. In doing this work, I learned more about the benefits of open source and how it aligned with my personal views. So much so that <a href="https://github.com/typhonius/odchbot?ref=adammalone.net">I published the code on GitHub to collaborate</a>.</p><p>After we'd built our ideal bot – one that made up for all the drawbacks of our decision to use OpenDCHub – we decided to expand to the web. 
Knowing nothing about HTML, our first iteration was a series of flat files hosting text and poorly written CSS. I had no idea that content management systems (CMS) existed, so each page was copied from an existing page and edited in <a href="https://www.vim.org/?ref=adammalone.net">Vim</a>.</p><p>I was informed by a friend that there was an easier way, and started to learn his recommended CMS, Drupal, to build a new site. Over time, we learned Drupal (and by extension PHP), caching strategies with <a href="https://varnish-cache.org/?ref=adammalone.net">Varnish</a>, database management, and web security.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/10/Chaotic_Neutral___Chaotic_Neutral-2.png" class="kg-image" alt srcset="/content/images/size/w600/2020/10/Chaotic_Neutral___Chaotic_Neutral-2.png 600w, /content/images/size/w1000/2020/10/Chaotic_Neutral___Chaotic_Neutral-2.png 1000w, /content/images/size/w1600/2020/10/Chaotic_Neutral___Chaotic_Neutral-2.png 1600w, /content/images/size/w2400/2020/10/Chaotic_Neutral___Chaotic_Neutral-2.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Homepage for the website.</figcaption></figure><p>We created a place where users could blog, access FAQs, and organise social events to take the friendships that we'd all been making online into the offline space. 
This community grew and grew with an eventual total of over <strong>6000</strong> users.</p><p>This evolved over time to be tightly integrated with the chat server in such a way that users could manage their identity and metadata in one place with that information disseminated to all services; mainly through a combination of glue code, bash scripts, and cron jobs.</p><p>Due also to the scattering of students (and former students) who had moved into residences off campus but still wanted to stay connected, along with the rise in prevalence of mobile phones, we were pressed to build a solution that would allow anyone, anywhere to chat together.</p><p>Thus another bot (WebRelayBot) was born. The quick and hacky way proved to be robust and reliable with the bot spawning a child that watched for chat to be entered online and injecting it both into chat and into the database.</p><p>Looking back at the code from that time that I've got backed up, it seems we spent a lot of time understanding the pitfalls of parent and child processes; specifically because we needed to run it in a continuous while loop to listen for new chat messages.</p><p>The code snippet below from 2010 shows how we worked out how to spawn a child that would operate independently of the standard processes rather than acting upon the provided (and infrequent) server triggers. For a long time, the bot would become unresponsive very quickly after instantiation and it was only with the use of tools like <a href="https://www.valgrind.org/?ref=adammalone.net">Valgrind</a> and <a href="https://perldoc.perl.org/Data::Dumper?ref=adammalone.net">Data::Dumper</a> that we worked out it was becoming a zombie and needed some special love to not be reaped.</p><figure class="kg-card kg-code-card"><pre><code class="language-perl">## VERY IMPORTANT: MUST HAVE $SIG{CHLD} SET TO IGNORE
## if it isn't then the bot will keep making forks and trying to keep
## up with the zombie processes. By not reaping the children it ignores 
## them and they don't become zombies.
$SIG{CHLD} = 'IGNORE';
sub main()
{
    &amp;child();
}
sub child() {
    ## We need this or odch thinks the script is dead.
    $pid=fork(); 
    if($pid == 0) {
        ## Consider using a trigger such as Inotify2 instead of while.
        while(1) {
            open(LOGFILE) or die("Could not open log file.");
            ## In case the server dies and there is an accumulation of lines written from the web they'll all get pasted into chat when it lives again.
            foreach $line (&lt;LOGFILE&gt;) {
                chomp($line);
                &amp;log_and_send_data($line);
            }
            close LOGFILE;
            &amp;empty();
            select(undef, undef, undef, 0.01);
        }
    }
    pid_log();
}</code></pre><figcaption>Learning parents and children with WebRelayBot.</figcaption></figure><p>Not being content with polling on the website every few seconds for updates, we started to learn this new thing called Node.js, which promised near instantaneous updates online.</p><p>Whilst none of us knew JavaScript, this really only needed the most moderate of code to be written. So we spent some more evenings testing a lot of different permutations, until finally it worked as we wanted. Using tools like <a href="https://www.wireshark.org/?ref=adammalone.net">Wireshark</a>, <a href="https://en.wikipedia.org/wiki/Netcat?ref=adammalone.net">netcat</a> and <a href="https://nmap.org/?ref=adammalone.net">nmap</a> to make sure our sockets were working and data was being passed back and forth correctly was probably over the top for the goal. The knowledge it bestowed on me, however, likely acted as a contributory catalyst for later on when I was doing this <em>for real</em>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/10/Chaotic_Neutral_Live_Chat___Chaotic_Neutral.png" class="kg-image" alt srcset="/content/images/size/w600/2020/10/Chaotic_Neutral_Live_Chat___Chaotic_Neutral.png 600w, /content/images/size/w1000/2020/10/Chaotic_Neutral_Live_Chat___Chaotic_Neutral.png 1000w, /content/images/size/w1600/2020/10/Chaotic_Neutral_Live_Chat___Chaotic_Neutral.png 1600w, /content/images/size/w2400/2020/10/Chaotic_Neutral_Live_Chat___Chaotic_Neutral.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Live chat integrating OpenDCHub with Node.js.</figcaption></figure><p>Over the next few months, we drank lots of coffee, stayed up late, scoured the internet for examples, and gained a ton of experience in everything that we needed to build the platform that we dreamed of and that all our users wanted.</p><p>We ended up with a beautiful plate of spaghetti. 
Our systems and tools were "integrated" together, albeit with files, periodic rsyncs, and some Blu Tack. We had a backup<em>-ish</em> strategy, and an <a href="https://www.msp360.com/resources/blog/rto-vs-rpo-difference/?ref=adammalone.net">RTO</a> of a couple of days, but more importantly it was something we'd built from scratch, and it worked really, really well.</p><p>During our time hacking scripts into code, code into modules, and modules into packages, I genuinely believe I learned more than I did during my actual university course – the one I was meant to be spending my time on.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/10/CN-Bot-Architecture-1.png" class="kg-image" alt srcset="/content/images/size/w600/2020/10/CN-Bot-Architecture-1.png 600w, /content/images/size/w1000/2020/10/CN-Bot-Architecture-1.png 1000w, /content/images/2020/10/CN-Bot-Architecture-1.png 1364w" sizes="(min-width: 720px) 720px"><figcaption>Flow diagram for the website, the server, and the bots.</figcaption></figure><p>The last challenge related to usability was to introduce a function to temporarily gag unruly users. While we had functions to kick and ban users, sometimes a better solution was simply to prevent them from speaking for a few minutes. Unfortunately, this wasn't something that we could do with a bot as by the time it received triggers, user messages had already been sent out to all those subscribed to the channel.</p><p>This meant we had to learn C in order to alter the functionality of OpenDCHub itself. Taking lots of cues from how the ban function worked, we copied and pasted as needed to create a gaglist and allow authorised users to manage it. 
Our next step was to recompile the software and deploy it with our patch.</p><p>Otherwise harmless users who felt the need to spam were henceforth taken care of by allowing them to view chat, send PMs, and make use of bot commands; however they were temporarily prevented from speaking in main chat and annoying other users.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/10/____ODCH_-_Welcome_to_the_chat_server___127_0_0_1_____EiskaltDC__-2.png" class="kg-image" alt srcset="/content/images/size/w600/2020/10/____ODCH_-_Welcome_to_the_chat_server___127_0_0_1_____EiskaltDC__-2.png 600w, /content/images/size/w1000/2020/10/____ODCH_-_Welcome_to_the_chat_server___127_0_0_1_____EiskaltDC__-2.png 1000w, /content/images/2020/10/____ODCH_-_Welcome_to_the_chat_server___127_0_0_1_____EiskaltDC__-2.png 1358w" sizes="(min-width: 720px) 720px"><figcaption>Example of gag functionality on OpenDCHub via a patch we wrote.</figcaption></figure><p>While everything inside the chat server was working well, university IT made changes that caused connections to our external server to be blocked. We weren't sure if this was on purpose or a consequence of other actions, but it gave us another challenge to get round in order to continue the service.</p><p>It didn't look like an IP block, as SSH connections were unhindered and we could still access the website. This led us to believe it was more likely to be either port blocking or packet sniffing. From here, we started learning encryption and SSL tunnelling as research indicated this to be a possible method of preventing our packets being sniffed.</p><p>Getting thousands of individual users to learn how to tunnel was likely not going to be a successful endeavour, especially since it took us a long time to learn how to do it ourselves. What we eventually landed on as a workable solution was <a href="https://www.stunnel.org/?ref=adammalone.net">Stunnel</a>. 
It was a program that we could both install on the server and provide as a preconfigured package for users to install so it would work out of the box.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/10/CN-Network-Diagram.png" class="kg-image" alt srcset="/content/images/size/w600/2020/10/CN-Network-Diagram.png 600w, /content/images/size/w1000/2020/10/CN-Network-Diagram.png 1000w, /content/images/2020/10/CN-Network-Diagram.png 1100w" sizes="(min-width: 720px) 720px"><figcaption>Network diagram of the server and how users connect.</figcaption></figure><p>The network diagram above is from our internal documentation and shows how we effectively blocked direct access to campus, requiring all residential college IP ranges to use Stunnel. The problem with Stunnel was that it identified all users as coming from localhost (127.0.0.1), since that was where users were proxying through. We wanted the ability to identify external users, so we required them to connect directly through a different port. Learning iptables made this possible since we could block ports based on where users were coming from.</p><p>Our next challenge was learning how to package preconfigured clients for Windows, Mac, and Linux so users wouldn't need to spend time and effort learning to configure a client and Stunnel by themselves. Our hypothesis was that by making everything install and connect with one click, we'd be more accessible to less technical users.</p><p>So with that, we set about packaging software and configuration into portable packages that could update with each new release of the underlying software – or if we needed to switch DNS/port due to the continual attempts at keeping the service running in the face of potential shut-down. The installers (and uninstallers) that we wrote married the advertised functionality with our trademark levity. 
The Mac installer, for example, starts <a href="https://ss64.com/osx/say.html?ref=adammalone.net">talking to the user</a> as it progresses through its assigned tasks.</p><p>This did eventually save our bacon when our shoestring-budget operation forgot to update the free domain name we were using. The <a href="https://www.oracle.com/corporate/acquisitions/dyn/?ref=adammalone.net">DynDNS</a> service required us to log in once a month and click a button to keep the domain assigned to our IP or it would be placed back into their pool. Unfortunately, the specific TLD that we used was moved onto the paid plan, so our domain was lost forever. Quickly rolling out new preconfigured software meant users were switched to a new domain with effectively no downtime.</p><p>The whole operation cost no money and earned us no money, but what it did provide us with was the phenomenal experience of meeting new people, the ability to learn a lot of interesting code/technology, and the satisfaction of running a service that was firmly in the greyest of areas.</p><p>As a final hurrah to say goodbye to the university and to claim our victory in the cat-and-mouse game against the university IT department, the admin team went on an audacious outing to their office where we had our photo taken with our team hoodies on.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/10/chaotic-neutral.png" class="kg-image" alt srcset="/content/images/size/w600/2020/10/chaotic-neutral.png 600w, /content/images/size/w1000/2020/10/chaotic-neutral.png 1000w, /content/images/size/w1600/2020/10/chaotic-neutral.png 1600w, /content/images/size/w2400/2020/10/chaotic-neutral.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Team hoodies and pseudonyms = ultimate nerds.</figcaption></figure><h3 id="the-choice">The choice</h3><p>After completing my studies in the UK, I returned to Australia and began to hunt for a place of employment. 
I had a background and degree in chemistry, but a simultaneous passion for technology that had grown organically (when I was supposed to be studying) through the creation and management of this service with friends.</p><p>Prior to returning to Australia, I had submitted about 15-20 applications for graduate jobs within my field. Unfortunately, I received mostly the same boiler-plate responses from each organisation, which led me to believe that the <a href="https://immi.homeaffairs.gov.au/visas/getting-a-visa/visa-listing/work-holiday-417?ref=adammalone.net">working holiday visa</a> I was on at the time was not sufficient for them to employ me.</p><p>I sought out other roles in my area and decided on a whim to apply for a developer role at a place that hadn't really said they were hiring. This entailed me walking into their office, CV and basic GitHub portfolio in hand, with an <a href="https://www.youtube.com/watch?v=7tOkpntQtBM&ref=adammalone.net">Oliver-esque</a> request to be employed – something I'd assumed would never work.</p><p>The other role that I interviewed for was as a QA chemist at a pharmaceutical company. Make the chemicals over and over again and test them to make sure they're still the same.</p><p>It was at this point, with two offers in hand, that I made the decision on which direction my career should go. Do I do the thing that I've been spending the last four years studying for, or do I do the thing that I find fun?</p><p>I think it's obvious which path I took. </p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Build, test, and deploy PHP applications with GitHub Actions ]]></title>
        <description><![CDATA[ I learned about GitHub Actions when it was released last year, although I didn&#39;t investigate further as I&#39;d already configured my hobby project CI/CD pipelines using Travis CI to a satisfactory standard.

While everything generally worked really well, I found myself running up against some obscure ]]></description>
        <link>https://www.adammalone.net/build-test-deploy-php-applications-with-github-actions/</link>
        <guid isPermaLink="false">5f815e749b68ea55e3c8c7c4</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Mon, 12 Oct 2020 05:20:08 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1556075798-4825dfaaf498?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>I learned about <a href="https://github.com/features/actions?ref=adammalone.net">GitHub Actions</a> when it was released last year, although I didn't investigate further as I'd already configured my hobby project CI/CD pipelines using <a href="https://travis-ci.org/?ref=adammalone.net">Travis CI</a> to a satisfactory standard.</p><p>While everything generally worked really well, I found myself running up against some obscure issue with Travis that meant that, despite tests completing successfully, the pull request in GitHub would be left with an amber status and a warning about merging untested PRs.</p><p>Fast forward to about a month ago, and I thought I'd give it another try as I was running through enhancements to some of the <a href="https://github.com/typhonius/acquia-php-sdk-v2?ref=adammalone.net">open-source</a> <a href="https://github.com/typhonius/acquia_cli?ref=adammalone.net">projects</a> I <a href="https://github.com/typhonius/acquia-logstream/?ref=adammalone.net">maintain</a>. The suite of tools I built was in response to Acquia's own tooling at the time being unmaintained and using <a href="https://github.com/acquia/acquia-sdk-php/issues/54?ref=adammalone.net">old (and insecure)</a> versions of libraries to power them.</p><p>It was also an opportunity for me to practice writing object-oriented PHP, Symfony console commands, and test-driven development in the <em>real world</em> – especially as my career had moved away from hands-on development a few years ago. Even though I work in less technical roles, I still think it's important to keep my eye in, and as I'll explain in a forthcoming blog about how I got into technology, I tend to learn best when I'm supposed to be working on something else.</p><p>For my tools, I decided to abstract the underlying API SDK, rather than package everything into a single library/client tool. I did this so others could consume it and build tools more befitting their needs. 
I also abstracted log streaming functionality as a one-off tool since it served a very specific purpose. The end-user CLI tool pulls in both as dependencies, so we end up with a basic dependency graph as documented below.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.adammalone.net/content/images/2020/10/package-dependencies-1.png" class="kg-image" alt loading="lazy" width="566" height="289"><figcaption>Basic dependency graph for PHP packages.</figcaption></figure><p>Because each of these exists as its own package, I needed three sets of CI/CD scripts so they could be enhanced and tested in parallel. In keeping with providing the best possible experience for users of these tools, I learned how to create packaged <a href="https://en.wikipedia.org/wiki/PHAR_(file_format)?ref=adammalone.net">phar applications</a> of the CLI tool and the logstream tool.</p><p>Phar applications allow all functionality and upstream dependencies that would typically span hundreds or thousands of PHP files to be pulled together in a single binary. They also omit development/test dependencies and do not require technical knowledge to download and use.</p><p>My task was therefore to convert three <code>.travis.yml</code> files into whatever GitHub Actions requires to maintain the same functionality.</p><ul><li><a href="https://github.com/typhonius/acquia-php-sdk-v2/blob/9432509cc423e5fb05ac31e7cc090ea2e928e654/.travis.yml?ref=adammalone.net">Acquia PHP SDK .travis.yml</a></li><li><a href="https://github.com/typhonius/acquia_cli/blob/e6c2ff84b305a3eb9ee7350a6a9e08a610ce7743/.travis.yml?ref=adammalone.net">Acquia Cli .travis.yml</a></li><li><a href="https://github.com/typhonius/acquia-logstream/blob/a9ae771cc0c1b180493c83537264fde0da2a5d73/.travis.yml?ref=adammalone.net">Acquia Logstream .travis.yml</a></li></ul><h3 id="getting-started">Getting started</h3><p>I figured that I should start off with the simplest package I had, the PHP SDK. 
Although the underlying functionality and its testing took me a long time to construct, the end result was an SDK that was not only fully tested, but simple to build and use. It doesn't compile into a phar by itself, so CI/CD would be a carbon copy of how it's tested locally.</p><p>To make things simple, I use composer to lock developer dependencies and composer scripts to manage testing. This means that what I use locally will be exactly the same as what other people download and what will get tested.</p><p>A call to <code>composer test</code> runs:</p><ul><li>PHP linting to make sure we don't have syntax errors</li><li>Unit testing with phpunit to ensure that expected inputs and outputs are nominal</li><li>Code sniffs to keep the codebase at a defined quality and standard (<a href="https://www.php-fig.org/psr/psr-12/?ref=adammalone.net">PSR-12</a>)</li><li>Static analysis to detect errors in type and usage, etc.</li></ul><p>The <code>.travis.yml</code> manifest pulls in dependencies, runs the above tests, triggers <a href="https://coveralls.io/github/typhonius/acquia-php-sdk-v2?ref=adammalone.net">coveralls to check code coverage</a>, and then, for tagged releases of the CLI and log stream tool, uploads the compiled phar over to GitHub to be <a href="https://github.com/typhonius/acquia_cli/releases/tag/2.0.9?ref=adammalone.net">linked to the release</a>.</p><p>I started off with the default <code>php.yml</code> file <a href="https://github.com/actions/starter-workflows/blob/master/ci/php.yml?ref=adammalone.net">provided by GitHub</a> and placed it in the <code>.github/workflows</code> directory. 
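</p><p>Pulling those pieces together, the overall shape of a build/test workflow in that directory looks roughly like this – a sketch only, with illustrative step names and a single runner rather than the full matrix and artefact steps the real manifests use:</p>

```yaml
# Sketch of a minimal .github/workflows build/test manifest for a
# composer-based package. Action versions and step names are illustrative,
# not copied from the actual repositories.
name: Build and test
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Setup PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: '7.4'
      - name: Install dependencies
        run: composer install --prefer-source --no-progress --no-interaction
      - name: Run tests
        run: composer test   # lint, phpunit, phpcs (PSR-12), static analysis
```

<p>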
From there, I started to customise each of the steps to align with how my <code>.travis.yml</code> files were set up.</p><p>This meant changing the <code>composer install</code> line to <code>composer install --prefer-source --no-progress --no-suggest --no-interaction</code> and creating a build matrix to test installation across different operating systems and with the two supported versions of PHP: 7.3 and 7.4.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">jobs:
  run:
    runs-on: ${{ matrix.operating-system }}
    strategy:
      matrix:
        operating-system: [ubuntu-latest, macos-latest, windows-latest]
        php-versions: ['7.3', '7.4']</code></pre><figcaption>Standard matrix for installing on Ubuntu, Mac, and Windows with PHP 7.3 and 7.4.</figcaption></figure><p>I found during the port that there already existed a number of really useful GitHub Actions that I could pull in to run the additional tasks that I needed – predominantly creation of releases and upload of artefacts.</p><p>I came to the conclusion during experimentation that I needed to split my manifests into two separate workflows:</p><ul><li>Build/test</li><li>Deploy</li></ul><p>The reason for this was that a workflow, as well as the jobs and steps inside it, gets triggered based on the <code>on</code> key at the top of the workflow file.</p><p>I wanted to run build/test on every single pull request and push to the master branch so it would be triggered when enhancements are being requested and eventually merged in.  </p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]</code></pre><figcaption>Matching all pushes and pull requests to master.</figcaption></figure><p>For deployments, I wanted to only create a release and upload a phar of the CLI and log stream tools when I had tagged a release. As this wouldn't occur on a pull request, I've limited it to pushes only. I also restricted the workflow to run when the pattern of the tag committed matched a semantic version. There's a <a href="https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions?ref=adammalone.net#filter-pattern-cheat-sheet">handy cheat sheet</a> on pattern matching within the yaml files that I used to match the tags.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">on:
  push:
    tags:
      - '[0-9]+.[0-9]+.[0-9]+'</code></pre><figcaption>Running a workflow based on semantic version tags.</figcaption></figure><p>I initially tried to bundle the <code>branches</code> and <code>tags</code> parameters in the same workflow file, but found that since GitHub was doing an OR match, I ended up creating releases and deploying on every pull request and push. The result of this mistake can be <a href="https://github.com/typhonius/acquia-logstream/releases?ref=adammalone.net">observed in my releases</a> which I've kept for posterity.</p><p>By splitting into two workflows, I could also test against the full matrix of OS and PHP version, but deploy quickly and simply with one version.</p><p>One enhancement over Travis that I did find was to make the created phar file and HTML-format code coverage report available for each individual pull request. This meant that for each change requested, the application could be downloaded and tested and a full code coverage report could be reviewed to ensure that new classes and methods were tested. 
This was all made possible with the <a href="https://github.com/actions/upload-artifact?ref=adammalone.net">upload-artefact</a> action, which took a lot of the pain away from uploading files generated in the workflow to <a href="https://github.com/typhonius/acquia_cli/actions/?ref=adammalone.net">the output</a>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.adammalone.net/content/images/2020/10/Moves_pcov_removal_to_a_more_appropriate_place__-_typhonius_acquia_cli_b356055.png" class="kg-image" alt loading="lazy" width="2000" height="1049" srcset="https://www.adammalone.net/content/images/size/w600/2020/10/Moves_pcov_removal_to_a_more_appropriate_place__-_typhonius_acquia_cli_b356055.png 600w, https://www.adammalone.net/content/images/size/w1000/2020/10/Moves_pcov_removal_to_a_more_appropriate_place__-_typhonius_acquia_cli_b356055.png 1000w, https://www.adammalone.net/content/images/size/w1600/2020/10/Moves_pcov_removal_to_a_more_appropriate_place__-_typhonius_acquia_cli_b356055.png 1600w, https://www.adammalone.net/content/images/size/w2400/2020/10/Moves_pcov_removal_to_a_more_appropriate_place__-_typhonius_acquia_cli_b356055.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>All the artefacts uploaded at the end of a successful test run.</figcaption></figure><p>The final key to integration made use of the <a href="https://github.com/actions/create-release?ref=adammalone.net">create-release</a> and <a href="https://github.com/actions/upload-release-asset?ref=adammalone.net">upload-release-asset</a> actions. The following code block shows how simple it was to both create the release and then upload the artefact to it. As this workflow only runs on Linux/PHP 7.3, I don't have to contend with the build matrix challenges above, only creating the release and uploading the artefact once. 
The deploy workflow only runs on a tagged commit, so runs much more infrequently compared to the build and test workflow.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">    - name: Create Release
      id: create_release
      uses: actions/create-release@v1
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      with:
        tag_name: ${{ github.ref }}
        release_name: ${{ github.ref }}
        draft: false
        prerelease: false

    - name: Upload Release Asset
      id: upload-release-asset
      uses: actions/upload-release-asset@v1
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      with:
        upload_url: ${{ steps.create_release.outputs.upload_url }}
        asset_path: ./acquiacli.phar
        asset_name: acquiacli.phar
        asset_content_type: application/octet-stream</code></pre><figcaption>Creating a GitHub release and attaching an artefact to it.</figcaption></figure><p>The end result can be seen after <a href="https://github.com/typhonius/acquia-logstream/releases/tag/0.0.8?ref=adammalone.net">creating a new release tag and pushing to GitHub</a>. I currently manually enter all of the changes using a custom git command, but perhaps in future I'll find a way to make that automated too.</p><h3 id="challenges">Challenges</h3><p>The first challenge I faced was in keeping code coverage functionality. I had previously used Coveralls by creating a <code>clover.xml</code> coverage file with Xdebug. Unfortunately, even though a <a href="https://github.com/marketplace/actions/coveralls-github-action?ref=adammalone.net">Coveralls action exists</a>, it expected code coverage in LCOV format. I spent a while tinkering with this, but ended up putting it in the <em>too hard</em> box since I couldn't find the right combination of PHP, phpunit, and LCOV to make it all work.</p><p>I also clobbered release artefacts for a while because with the upload-artefact action, any file uploaded with an existing name overwrites the previous upload. Since my build matrix tested 8 iterations, I was overwriting 7 times with no certainty about what would be left over afterwards.</p><p>Windows support was another issue that plagued me for a while, as I don't have a Windows machine to test on locally. This meant that I'd fire off a change to the manifest and see what was reported back before making another small change and going from there. 
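</p><p>One trick that shortens this remote feedback loop – an aside of mine, not something from the original pipelines – is to temporarily collapse the build matrix to just the failing runner while debugging, then restore the full matrix once the step works:</p>

```yaml
# Temporary debugging matrix: run only the problematic combination
# so each push triggers a single fast job. Restore the full
# operating-system and php-versions lists once the step is fixed.
strategy:
  matrix:
    operating-system: [windows-latest]
    php-versions: ['7.4']
```

<p>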
The main issues for me were that Windows didn't know how to handle <code>/usr/bin/env php</code> in the <a href="https://en.wikipedia.org/wiki/Shebang_(Unix)?ref=adammalone.net">shebang</a>, that PowerShell doesn't use the same commands (or syntax) as Unix, and of course the ubiquitous issue of line endings.</p><p>Every single one of my test cases errored out with the following. This is the result of Windows line endings being CRLF instead of the LF used on Linux and Mac. <a href="https://www.php-fig.org/psr/psr-12/?ref=adammalone.net#22-files">As PSR-12 requires LF</a>, errors were raised for each of the test files.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">FILE: D:\a\acquia_cli\acquia_cli\src\Cli\AcquiaCli.php
----------------------------------------------------------------------
FOUND 1 ERROR AFFECTING 1 LINE
----------------------------------------------------------------------
 1 | ERROR | [x] End of line character is invalid; expected "\n" but
   |       |     found "\r\n"
----------------------------------------------------------------------
PHPCBF CAN FIX THE 1 MARKED SNIFF VIOLATIONS AUTOMATICALLY
----------------------------------------------------------------------</code></pre><figcaption>Phpcs error seen on Windows due to line endings.</figcaption></figure><p>Additionally, Windows PowerShell doesn't recognise common Unix commands such as <code>find</code>, so my code linting is currently getting ignored on the Windows runners.</p><h3 id="solutions">Solutions</h3><p>To allow the continued check of code coverage, I switched over to the <a href="https://github.com/krakjoe/pcov?ref=adammalone.net">pcov library</a> which can be used in place of Xdebug. This was mainly automatic with the following two entries in the workflow file.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">    - name: Setup PHP with pecl extension
      uses: shivammathur/setup-php@v2
      with:
        php-version: ${{ matrix.php-versions }}
        tools: pecl
        extensions: pcov
        
    - name: Setup pcov
      run: |
        composer require pcov/clobber
        vendor/bin/pcov clobber</code></pre><figcaption>Additions to check code coverage with pcov</figcaption></figure><p>The pcov library has a requirement on the <a href="https://pecl.php.net/package/pcov?ref=adammalone.net">php-pcov extension</a>, which can be installed with pecl. As this is a bit arcane for everyday users, I haven't included it in <code>composer.json</code> as a dependency for any of the packages. Instead, I include a step to install the extension and then require pcov/clobber prior to testing.</p><p>Without this, phpunit tests will still run as normal; however, code coverage won't be reported to the user, which I think is a good middle ground between checking code coverage and making the packages simple to extend without altering PHP installs.</p><p>I also altered my <code>phpunit.xml</code> to output in a few different formats as shown below. The formats allow code coverage to be reported inline in the test log after phpunit finishes running, as well as in a downloadable HTML file. I've documented below how to upload the code coverage file for each run as an artefact.</p><figure class="kg-card kg-code-card"><pre><code class="language-xml">&lt;logging&gt;
    &lt;log type="coverage-text" target="php://stdout" showUncoveredFiles="true"/&gt;
    &lt;log type="coverage-clover" target="tests/logs/clover.xml" showUncoveredFiles="true"/&gt;
    &lt;log type="coverage-html" target="tests/logs/phpunit.html" lowUpperBound="35" highLowerBound="70"/&gt;
&lt;/logging&gt;</code></pre><figcaption>Logging section of phpunit.xml</figcaption></figure><p>I realised that I could prevent test runs from overwriting output artefacts by altering the name based on the build matrix to upload multiple versions. Within my test manifest I have specified the name using matrix variables so they get uploaded as different artefacts.</p><p>I've done this for both the compiled phar file as well as the code coverage output which results in a downloadable zip containing a simple, easily navigable HTML site to show tested classes and methods and where more attention to unit testing may be required.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">    - name: Upload artefact
      uses: actions/upload-artifact@v2
      with:
        name: ${{ runner.os }}-php-${{ matrix.php-versions }}-acquiacli.phar
        path: acquiacli.phar
        if-no-files-found: error

    - name: Upload code coverage
      uses: actions/upload-artifact@v2
      with:
        name: ${{ runner.os }}-php-${{ matrix.php-versions }}-phpunit.html
        path: ./tests/logs/phpunit.html</code></pre><figcaption>Upload artefacts and code coverage to the test.</figcaption></figure><p>As for Windows, I put this off for a while, trying a few different solutions without much success. The first issue I faced was line endings. Initially, I tried to write a PowerShell command to convert each file to Unix line endings, as I read on a forum somewhere that <code>unix2dos.exe</code> was included in Cygwin. I also had to add the <code>&amp;</code> as apparently Windows is finicky about a quotation mark appearing first within the braces.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">- name: Convert Windows CRLF to LF.
      if: runner.os == 'Windows'
      run: |
        dir ".\src" -recurse -include *.php | %{ &amp; "C:\Program Files\unix2dos.exe" $_.FullName}
        dir ".\tests" -recurse -include *.php | %{ &amp; "C:\Program Files\unix2dos.exe" $_.FullName}</code></pre><figcaption>Attempt (failed) at converting line endings using unix2dos.</figcaption></figure><p>This didn't work, as unix2dos was not installed on GitHub Actions – at least not in that location. I did initially research how to add the binary but recognised that doing so would have been tantamount to going down the <a href="https://aliceinwonderland.fandom.com/wiki/Rabbit_Hole?ref=adammalone.net">Rabbit Hole</a>.</p><p>Next, I tried a couple of quick perl one-liners to remove the Windows carriage return (<code>\r</code>) from each of the files.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">    - name: Convert Windows CRLF to LF.
      if: runner.os == 'Windows'
      run: |
        dir ".\src" -recurse -include *.php | %{ &amp; perl -i -p -e "s/\r//" $_.FullName}
        dir ".\tests" -recurse -include *.php | %{ &amp; perl -i -p -e "s/\r//" $_.FullName}</code></pre><figcaption>Attempt (failed) at converting line endings using Perl.</figcaption></figure><p>This attempt didn't error out like the unix2dos attempt did, but I was still receiving the same phpcs errors as before so I decided to look for an alternate solution.</p><p>My answer came in the form of a <a href="https://github.com/actions/checkout/issues/135?ref=adammalone.net">GitHub issue raised against the checkout action itself</a>. I opted to create a <code>.gitattributes</code> file within the repository with a single line to match all cloned files and force them to use LF line endings.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">* text=auto eol=lf</code></pre><figcaption>.gitattributes for consistent line endings across different platforms.</figcaption></figure><p>This may have downstream implications for Windows users wishing to enhance the code, however I'm expecting that most Windows users will download the compiled artefact where this won't be an issue. If it does arise again, I'll switch over to the alternate method which will only impact line endings on CI.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">    steps:
      - name: Set git to use LF
        run: |
          git config --global core.autocrlf false
          git config --global core.eol lf</code></pre><figcaption>Alternate method for changing line endings on CI.</figcaption></figure><p>To solve the shebang issue that was impacting the build of the phar, I changed the invocation in <code>composer.json</code> to <code>php tools/box compile</code>. When I ran the script directly with <code>./tools/box compile</code>, the shebang couldn't find <code>/usr/bin/env</code> on Windows; however, by calling php directly, Windows was able to ignore the shebang.</p><h3 id="conclusion">Conclusion</h3><p>Overall, I'm super happy with the end result and have moved all of the tools over to GitHub Actions to benefit from the tighter integration, the community-supported actions, and the ability to easily manage releases and artefacts as part of CI/CD. Hopefully the challenges and solutions I've documented above can aid others who are looking to convert similar packages.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Delivering with empathy ]]></title>
        <description><![CDATA[ I remember learning about the difference between sympathy and empathy in middle
school. 

&gt; Sympathy: feelings of pity and sorrow for someone else&#39;s misfortune.
Empathy: the ability to understand and share the feelings of another.
Pretty damn simple to understand the difference when you read each of the ]]></description>
        <link>https://www.adammalone.net/delivering-with-empathy/</link>
        <guid isPermaLink="false">5f45f3a501f2c46da78cd8f2</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Tue, 08 Sep 2020 04:30:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1451471016731-e963a8588be8?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>I remember learning about the difference between sympathy and empathy in middle school. </p><blockquote>Sympathy: feelings of pity and sorrow for someone else's misfortune.<br>Empathy: the ability to understand and share the feelings of another.</blockquote><p>Pretty damn simple to understand the difference when you read each of the above definitions, but harder to apply in reality. Even if we're taught adages like:</p><blockquote>Do unto others as you would have done to yourself.</blockquote><h3 id="why-does-empathy-matter">Why does empathy matter?</h3><p>Despite what may have been commonplace thought previously, humans are not robots. The traditional model of supplier and client, where one is essentially employed at the pleasure of the other (and at their full disposal), doesn't really align with empathetic delivery because it places a hierarchy on the relationship.</p><p>In my view, regardless of whether they're personal relationships between friends, or business relationships between organisations, each should be equal and mutually beneficial to both parties. A one-sided relationship in either context leads to unhappiness.</p><p>A great, albeit comedic, example of an attempt at creating an uneven business relationship is from one of my favourite TV shows of all time, <em>Peep Show</em>. Jeremy attempts to order a tradesperson to do a job, because he's the boss and the tradesperson is the worker. 
</p><figure class="kg-card kg-embed-card"><iframe width="612" height="344" src="https://www.youtube.com/embed/DOvfxSz6q8o?start=28&feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>While this looks ridiculous, I <strong>have</strong> had a client use similar language on me and even though I didn't kick any doors down like the tradesperson, it definitely stayed with me as an example of poor empathy<sup>[<a href="#footnote-1">1</a>]</sup>.</p><h3 id="what-does-delivery-look-like-without-empathy">What does delivery look like without empathy?</h3><p>I feel like most people who've had a job have come across a lack of empathy in the workplace in some way, shape, or form. You don't have to search for long before finding either a <a href="https://tvtropes.org/pmwiki/pmwiki.php/Main/DrJerk?ref=adammalone.net">common</a> <a href="https://tvtropes.org/pmwiki/pmwiki.php/Main/MeanBoss?ref=adammalone.net">trope</a> of <a href="https://tvtropes.org/pmwiki/pmwiki.php/Main/PointyHairedBoss?ref=adammalone.net">workplace</a> <a href="https://tvtropes.org/pmwiki/pmwiki.php/Main/DaChief?ref=adammalone.net">jerks</a>, or stories from friends who <em>hate their boss/client</em>.</p><p>Delivery is a subset of these general instances and potentially complicated further due to the inherent inclusion of paying a third party for defined services. It can be hard sometimes to properly explain the value provided by experts providing services in specific fields. </p><p>As a delivery team, especially in consulting, you're often brought in to deliver a project that a business can't do by itself. 
The organisation reaps the benefit of the years of collective experience the team has in order to produce high-quality work quickly and effectively.</p><p>In some cases, though, delivery teams are seen as expensive equivalents of existing staff, which is of course a false equivalence.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet" data-width="550"><p lang="en" dir="ltr">If I do a job in 30 minutes it’s because I spent 10 years learning how to do that in 30 minutes. You owe me for the years, not the minutes.</p>&mdash; Radical Environmentalist (@davygreenberg) <a href="https://twitter.com/davygreenberg/status/1096304800474361856?ref_src=twsrc%5Etfw&ref=adammalone.net">February 15, 2019</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>Without empathy, any project with those preconceived notions becomes a bit like a <a href="https://harrypotter.fandom.com/wiki/Dementor?ref=adammalone.net">Dementor</a>, insofar as it sucks the fun out of everything. The thing that linked fun and delivery and got me thinking about this topic was a discussion I had on my <a href="/post/personal-resets/">Coast Track hike</a> about scaling/measuring <em>fun</em>.</p><p>I learned about <a href="https://www.rei.com/blog/climb/fun-scale?ref=adammalone.net">REI's fun scale</a> and we quickly agreed that we've experienced all types of fun while out in the wilderness, with the most common being I and II<sup>[<a href="#footnote-2">2</a>]</sup>. I got to thinking that the same fun scale can be applied at work too, with type I and II being where you want to be for the majority of the time, but type III creeping in with deadlines, proposals, launches etc. Unfortunately, it can sometimes be the case that type III gets mis-categorised in hindsight.</p><p>This sent me down a rabbit-hole researching cognitive biases – specifically those related to perception and hindsight<sup>[<a href="#footnote-3">3</a>]</sup>.</p><p>The benefit (and detriment) of hindsight is that our negative experiences gradually fade. 
This is a form of cognitive bias called the <strong><a href="https://www.niu.edu/jskowronski/publications/WalkerSkowronski2009.pdf?ref=adammalone.net">Fading Affect Bias</a> (<a href="https://en.wikipedia.org/wiki/Fading_affect_bias?ref=adammalone.net">FAB</a>)</strong>.</p><blockquote>The FAB shows that the emotion associated with negative event memories generally fades faster than the emotion associated with positive event memories.</blockquote><p>Drawing together the threads of the Fun Scale and the Fading Affect Bias, it's easy to see how type III fun experienced during delivery could be mis-categorised as type I or II after the fact.</p><p>After a nightmare project with a difficult client, people have the propensity to reduce the negativity they just experienced. The team may celebrate the launch and look back with positivity over the shared experience, but in reality the project was shitty and nobody was actually having any fun.</p><p>I've got a few empathy-free stories from projects I've either led or participated in; however, I always use one story as my example of:</p><!--kg-card-begin: markdown--><ol>
<li>How not to work in a blended team</li>
<li>How not to deliver with empathy</li>
<li>A cautionary tale of old school project managers</li>
</ol>
<!--kg-card-end: markdown--><p>The client project manager was an angry person. They shouted when they didn't get their way. They demanded that my team be available during public holidays to support their team. They categorically stated that no change requests would be approved. The oddest one was when they demanded I define what a linked list was to prove I knew about technology.</p><p>Storming around from project room to project room, they exuded an oppressive air over both my team and their own team. I acted as my team's umbrella to defuse their anger, explain rationally why I would not be letting the team work during their holidays, and walk the client project manager through change requests.</p><p>By diverting all of their frustration to me, I'd assumed a burden that my team detected and rallied behind in support; although not without taking on some of that burden themselves.</p><p>In the end, we delivered what we'd promised, to deadlines and to a high standard. However, the stress along the way could so easily have been mitigated with empathy, understanding, and communication. The client project manager's high blood pressure, too, might have been mitigated.</p><p>In essence, a huge fuss was made over nothing, and as a result a whole lot more type III fun was experienced.</p><h3 id="how-do-i-deliver-with-empathy">How do I deliver with empathy?</h3><p>By delivering with empathy, professionals can enjoy project work even if it's tough, and look back without the Fading Affect Bias occluding their view.</p><p>Regardless of the title, job description, seniority, or company, I have a firm belief that we are all part of <a href="https://www.youtube.com/watch?v=OeKoh2zHg1o&ref=adammalone.net">Team Human</a>. Everyone is an individual with thoughts, feelings, and emotions, and efficient delivery relies on people getting into a <a href="https://www.verywellmind.com/what-is-flow-2794768?ref=adammalone.net">flow state</a> without undue stressors. 
</p><p>That all being said, I'm also not an idealist. I realise that sometimes unforeseen events occur, causing slow-downs, overruns, and days where a team needs to work late to deliver on goals that they have committed to. </p><p>It is because of this that I think the mark of a person is more accurately portrayed not when everything is going right, but when everything is going wrong. How a person reacts and manages that situation will ripple out and impact not only their team, but others beyond that too. </p><p>I have some general rules for continuous empathetic delivery during projects:</p><p><strong>1. Check in</strong></p><p>The simplest thing I recommend for team-members, clients, and stakeholders is to check in. As simple as it seems, going over and above the typical "how are you" exchange evokes deeper and more meaningful connections that allow people to know each other as individuals rather than just names.</p><p>For the vast majority of projects, what is being delivered is less important than major life events, and understanding what's going on in people's lives can have an immediate cooling effect when things don't go to plan. Rather than focusing only on the microcosm of the issue at hand, taking in the wider context builds understanding and empathy.</p><p><strong>2. Mentor up, down, and sideways</strong></p><p>Mentoring is often seen as a senior person providing advice to a junior person. 
Too often, I see more senior people completely out of touch with junior people, their lived experiences, and their goals.</p><p>I believe mentoring should be undertaken up, down, and sideways because being a more effective leader comes not only from learning how to lead from seniors, but also from receiving continuous constructive feedback from the team being managed.</p><p>Knowing what works and what doesn't allows leaders to tune their leadership style both to get the most out of their team and to be more understanding of the team's needs, giving them the best delivery experience.</p><p><strong>3. Group Therapy</strong></p><p>Not the traditional group therapy and not one of the <a href="https://podcasts.apple.com/us/podcast/above-beyond-group-therapy/id286889904?ref=adammalone.net">podcasts I listen to</a>; rather an informal and semi-regular session outside of standard delivery ceremonies to get together as a team and discuss thoughts and feelings.</p><p>Once again I'll rely on an old adage:</p><blockquote>A problem shared is a problem halved.</blockquote><p>I find it hugely beneficial to spend time as a team running through hopes, fears, and ideas person by person. In my experience, having a <em>safe space</em> to air frustrations, admit imposter syndrome, and receive support and advice from the people you're in the trenches with day-by-day is the most efficient way to prevent teammates stewing in discomfort.</p><p>I aim to run a session every month – typically at the end of the day in a casual setting. 
I'll introduce the session, share my own thoughts and feelings to prevent the awkwardness of going first, and then ask other team members to volunteer their own.</p><p>Not only is this a phenomenal method of allowing the group to support itself, but patterns may quickly emerge through open conversation that allow the astute to prevent risks and issues from emerging.</p><p>I've also found that this is an effective mitigation strategy for <a href="https://en.wikipedia.org/wiki/Impostor_syndrome?ref=adammalone.net">Imposter Syndrome</a>. By communicating challenges to the group, individuals are prevented from living inside their own head and the group is able to help recognise this most persistent feeling of self-doubt.</p><p><strong>4. Share personal targets and create group goals</strong></p><p>An extension of <em>Group Therapy</em>, I encourage the sharing of personal targets to allow the group to continually work towards and for each other's successes. With the knowledge of what success means to someone else, your own delivery style may be adjusted to be more empathy-driven.</p><p>A step further than personal targets is group goals. The group should decide on what they want to achieve (aside from completing the project), as a combined goal will allow the entire team to work towards something that they have all become invested in together.</p><p><strong>5. Become invested</strong></p><p>Being invested as a team, even one blended with clients, is my final recommendation for delivering with empathy. Being invested as a combined team: in each other, in goals, and in wellbeing. The aim of becoming invested is to make everyone put people first, and the challenge second.</p><p>As someone who follows technology and tech blogs closely, the concept of the <a href="https://codeascraft.com/2012/05/22/blameless-postmortems/?ref=adammalone.net">blameless post-mortem</a> is one that's stuck with me since I first learned about it. 
This is especially true as someone who has caused 1-2 of these post-mortems to be created<sup>[<a href="#footnote-4">4</a>]</sup>.</p><p>The sinking feeling in your stomach when you screw up/take a platform down/fall behind on a deadline shouldn't be compounded further by anger. Being invested means that individuals come first, and the solution to the problem is worked on not out of fear of further reprisal, but out of the fun of the challenge itself.</p><h3 id="final-thoughts">Final Thoughts</h3><p>While these thoughts are very much my own personal brand of delivering with empathy, they don't form an exhaustive list and I enjoy learning other techniques too.</p><p>I try to share where I am able with client project managers and stakeholders because the thoughts of the team, percolated, provide far more colour and substance than my thoughts alone.</p><h3 id="footnotes">Footnotes</h3><!--kg-card-begin: html--><p>
    <a name="footnote-1">1</a>: What the client did say was "I am the client, and I am paying you to do as I tell you". Yeah right.
    <br />
    <a name="footnote-2">2</a>: My most recent type III experience was when we accidentally walked too far around a ridge above the <a href="https://bushwalkingnsw.com/walk.php?nid=198&ref=adammalone.net">Colo River</a> and ended up having to descend a mountaineering track under encroaching darkness. <a href="http://www.bushwalk.com/forum/viewtopic.php?f=36&t=18552&ref=adammalone.net">Other</a> <a href="https://djm74.blogspot.com/2011/03/crawfords-lookout-to-wollemi-creek.html?ref=adammalone.net#.VGBTmohXerU">people</a> have fallen foul of the same track though so it doesn't feel too bad in hindsight.
    <br />
  <a name="footnote-3">3</a>: Another good cognitive bias to read up on is the <a href="https://rationalwiki.org/wiki/Hindsight_bias?ref=adammalone.net">hindsight bias</a>.
    <br />
  <a name="footnote-4">4</a>: I may have accidentally caused a couple of major global publications to disappear for an hour due to some incorrect configuration options I set.
</p><!--kg-card-end: html--> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Cache purging with Ghost and Cloudflare Workers ]]></title>
        <description><![CDATA[ As I have moved my blog platform over to Ghost, I&#39;ve realised that there will be
other parts of my technical ecosystem that will keep me busy tinkering. One such
part of that is how to cache every page on this site (effectively) permanently
whilst also allowing new ]]></description>
        <link>https://www.adammalone.net/cache-purging-with-ghost-and-cloudflare-workers/</link>
        <guid isPermaLink="false">5f4163fd7c8d3d780533a5d3</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Wed, 26 Aug 2020 06:32:37 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1545987796-200677ee1011?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>As I have moved my blog platform over to Ghost, I've realised that there will be other parts of my technical ecosystem that will keep me busy tinkering. One such part of that is how to cache every page on this site (effectively) permanently whilst also allowing new posts and updates to be seen.</p><p>I won't belabour the point about <em>active purging</em> as other people online have covered it far more accurately and succinctly than I could. My aim is simply to reduce the amount of origin hits my server receives whilst simultaneously keeping pages in the CDN for as long as possible to heighten the chance of really fast delivery.</p><p>One thing I noticed a couple of days after publishing my last article was that the RSS feed (and thus the auto-tweeter) hadn't updated. I knew I'd been pretty aggressive with Cloudflare caches so I turned to Google to find out how Ghost could purge it.</p><p>Quickly enough, I came across <a href="https://www.paolotagliaferri.com/cloudflare-cache-purge-with-ghost-webhook/?ref=adammalone.net">Paolo's blog</a> authored just a few days prior about connecting Ghost with <a href="https://workers.cloudflare.com/?ref=adammalone.net">Cloudflare Workers</a> to purge the cache on site change. I followed the instructions and they worked perfectly, however I wanted to take his work a step further and:</p><!--kg-card-begin: markdown--><ol>
<li>Change the logic around authentication</li>
<li>Only trigger Workers on page publication or update</li>
<li>Only purge the cache for specific pages rather than my whole domain</li>
</ol>
<!--kg-card-end: markdown--><h3 id="changing-the-authentication">Changing the authentication</h3><p>No matter how many times I tried, I couldn't get the Cloudflare Worker to authenticate with username and password parameters in the URL. With that in mind, I decided that I'd change the way that I was constructing webhooks to use parameters in the path itself. I constructed an arbitrary path that I could use to trigger different cache purges on the same Worker, as well as to check that the username and password were correct.</p><pre><code class="language-javascript">const publishPath = `/cf-purge/purge/publish/${WEBHOOK_USER}/${WEBHOOK_PASSWORD}/`;
const updatePath = `/cf-purge/purge/update/${WEBHOOK_USER}/${WEBHOOK_PASSWORD}/`;
</code></pre><p>While there are some obvious flaws using this technique (not least that the username and password are directly in the URL), we are at least POSTing over HTTPS directly from my server to my Worker. There was also a <a href="https://serverfault.com/a/541206?ref=adammalone.net">thread I found</a> about how basic auth parameters were becoming less supported, so I'm currently comfortable with what I changed.</p><h3 id="triggering-on-publish-or-update">Triggering on publish or update</h3><p>When I publish a post on this blog, the only pages that really need to be purged are the front page where the list of articles is, and the RSS feed which is what <a href="https://dlvrit.com/?ref=adammalone.net">dlvrit.com</a> uses to post articles directly to my Twitter account.</p><p>When I update a post, it's highly unlikely that the front page or RSS will actually need changing as it'll more likely be a typo or additional edit to the post. In this case, the only item in the cache that I need to purge is the post itself.</p><p>Rather than using a single webhook for any site change, I used two different webhooks that posted to different paths on the same Worker. 
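</p><p>Conceptually, the Worker just matches the incoming path and decides which URLs need purging. Here's a minimal sketch of that decision logic (an illustration only – the function name and hard-coded credentials below are mine, not the exact code from my gist, where the credentials live in Worker secrets):</p>

```javascript
// Illustrative only: in a real Worker these would come from secrets,
// never from literals committed to code.
const WEBHOOK_USER = "user";
const WEBHOOK_PASSWORD = "pass";

const publishPath = `/cf-purge/purge/publish/${WEBHOOK_USER}/${WEBHOOK_PASSWORD}/`;
const updatePath = `/cf-purge/purge/update/${WEBHOOK_USER}/${WEBHOOK_PASSWORD}/`;

// Given the webhook request's path, the site root, and the post URL taken
// from Ghost's webhook body, return the list of URLs to purge.
function purgeTargets(pathname, siteUrl, postUrl) {
  if (pathname === publishPath) {
    // New post: the front page and the RSS feed are what change.
    return [`${siteUrl}/`, `${siteUrl}/rss/`];
  }
  if (pathname === updatePath) {
    // Edited post: only the post itself needs purging.
    return [postUrl];
  }
  // Unknown path (or wrong credentials embedded in it): purge nothing.
  return [];
}
```

<p>The Worker's <code>fetch</code> handler then POSTs the resulting list as the <code>files</code> array to Cloudflare's zone <code>purge_cache</code> API endpoint.</p><p>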
The top is triggered when I publish a post; the bottom when I update a post that is already published.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/08/ghost-webhook.png" class="kg-image" alt srcset="/content/images/size/w600/2020/08/ghost-webhook.png 600w, /content/images/size/w1000/2020/08/ghost-webhook.png 1000w, /content/images/size/w1600/2020/08/ghost-webhook.png 1600w, /content/images/2020/08/ghost-webhook.png 2268w" sizes="(min-width: 720px) 720px"><figcaption>Ghost webhooks</figcaption></figure><h3 id="selectively-purging">Selectively purging</h3><p>I try to be as defensive as possible about when I purge; not because I'm hosting mission-critical material but more because I try to follow best practices at home where possible – practicing what I preach.</p><p>I did some digging into what Ghost sends in the webhook for posts and updates so I could find out what I could switch on in order to send to Cloudflare for purging.</p><p>My first instinct was to dump what came to the Worker itself, but that proved slow, given both my limited JavaScript knowledge and the time taken deploying to the Worker. I <em>could</em> have set up <a href="https://ngrok.com/?ref=adammalone.net">ngrok</a> to do it locally. I could have used <code>wrangler dev</code>. 
All I wanted to see was the POSTed body though so I found <a href="https://pipedream.com/?ref=adammalone.net">Pipedream</a> which acts as a big endpoint to examine what is being sent to it.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/08/Pipedream.png" class="kg-image" alt srcset="/content/images/size/w600/2020/08/Pipedream.png 600w, /content/images/size/w1000/2020/08/Pipedream.png 1000w, /content/images/size/w1600/2020/08/Pipedream.png 1600w, /content/images/2020/08/Pipedream.png 2094w" sizes="(min-width: 720px) 720px"><figcaption>Pipedream Ghost output</figcaption></figure><p>The body content for the <code>Published post updated</code> hook looks similar to the above picture and has pretty much everything you'd want to use on a Worker. The key item for me was the <code>body.post.current.url</code> value as this is what I'd be purging.</p><p>With that captured, I could switch on whether the webhook path was <code>publish</code> or <code>update</code> and send different POST data to Cloudflare.</p><p>I've created a <a href="https://gist.github.com/typhonius/e43590bd83935baff2151e2c0c732b69?ref=adammalone.net">gist</a> with my tweaks to Paolo's code for people who would like similar functionality with different URLs. The main changes are some code I stole from <a href="https://developers.cloudflare.com/workers/examples/read-post?ref=adammalone.net">Cloudflare's documentation</a> to read the POSTed body and the code that changes how I purge based on path.</p><!--kg-card-begin: html--><script src="https://gist.github.com/typhonius/e43590bd83935baff2151e2c0c732b69.js"></script><!--kg-card-end: html--><p>As a final note, I did try to use the <a href="https://developers.cloudflare.com/workers/runtime-apis/cache?ref=adammalone.net">Cache API</a> from the Worker to directly purge the cache object without having to undertake a separate API request. 
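</p><p>That abandoned attempt looked roughly like this (a sketch rather than my exact code; note that <code>caches.default</code> only exists inside the Workers runtime):</p>

```javascript
// Sketch: purging via the Workers Cache API instead of a separate API request.
// Cache API keys are Requests, so build one for the URL to purge.
function cacheKeyFor(url) {
  return new Request(url, { method: "GET" });
}

// delete() resolves to true if a cached entry was found and removed --
// but, crucially, only in the datacentre this Worker invocation runs in.
async function purgeFromCache(url) {
  return caches.default.delete(cacheKeyFor(url));
}
```

<p>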
After trying a couple of times, the purge was coming back successful, but I was still seeing cache hits with my test script.</p><p><a href="https://community.cloudflare.com/t/in-cloudflare-workers-i-am-seeking-clarification-on-calling-delete-from-the-cache-api-and-not-replicated-to-any-other-data-centers-or-whats-the-best-way-to-purge-custom-cache-keys-on-all-data-centers-from-inside-of-worker/174247?ref=adammalone.net">Turns out, other people had run into this as well</a>. Long story short, the Cache API only works within the datacentre the Worker is based in, and as my Ghost server is geographically distant from me, I wasn't seeing the purge from my local POP.</p><p>This wouldn't have been a good path for me to pursue anyway as I would want consistent purging and caching regardless of location.</p><h3 id="testing">Testing</h3><p>Testing that this was working was the easiest part of this whole process and something that can be accomplished with only a terminal. I picked a handful of paths that wouldn't change as well as a handful of paths that would. I then wrote a quick bash one-liner to curl these URLs and check their status after I created new pages and updated existing ones.</p><pre><code># Testing after publishing a new page
$ for i in / /rss/ /about-me/ /public-keys/ /post/github/ /post/personal-resets/ ; do echo $i; curl -skILXGET https://www.adammalone.net$i | grep cf-cache-status; done
/
cf-cache-status: MISS
/rss/
cf-cache-status: MISS
/about-me/
cf-cache-status: HIT
/public-keys/
cf-cache-status: HIT
/post/github/
cf-cache-status: HIT
/post/personal-resets/
cf-cache-status: HIT

# Testing after updating the Github post
$ for i in / /rss/ /about-me/ /public-keys/ /post/github/ /post/personal-resets/ ; do echo $i; curl -skILXGET https://www.adammalone.net$i | grep cf-cache-status; done
/
cf-cache-status: HIT
/rss/
cf-cache-status: HIT
/about-me/
cf-cache-status: HIT
/public-keys/
cf-cache-status: HIT
/post/github/
cf-cache-status: MISS
/post/personal-resets/
cf-cache-status: HIT</code></pre><h3 id="where-can-i-improve">Where can I improve?</h3><p>While the quality of my JavaScript definitely leaves something to be desired, there are a handful of things that I'd like to improve at some point.</p><ul><li>Detect if a post has changed its canonical URL and purge the <em>old</em> URL rather than the new one. My assumption would be that this is in <code>body.post.previous</code> but I haven't tested yet.</li><li><a href="https://community.cloudflare.com/t/how-do-i-read-the-request-body-as-json/155393/3?ref=adammalone.net">There are a few other ways to read the request body</a> that look cleaner than the code I've got in there currently. These would be interesting to explore.</li><li>There has to be a better way to secure this than passing authentication parameters in the URL. While I will explore locking the worker to my server IP, I'd like to look at other mechanisms for security. Creating a tunnel directly from my server to Cloudflare would be ideal (and possible with cloudflared/Argo), but maybe not pragmatic for my efforts and budget.</li><li>Extend the cache clear by also pre-filling the cache so users get both warm caches and new content.</li></ul> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Personal Resets ]]></title>
        <description><![CDATA[ I find it&#39;s highly beneficial to do a personal reset every so often.

While I can&#39;t speak for others, for me personal resets come in a number of
different forms although usually over the course of at least a weekend. They
take me out of my ]]></description>
        <link>https://www.adammalone.net/personal-resets/</link>
        <guid isPermaLink="false">5f38f63a212c8c7acd0086bc</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Thu, 20 Aug 2020 09:34:38 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1520789304460-c5885e7c675d?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>I find it's highly beneficial to do a personal reset every so often.</p><p>While I can't speak for others, for me personal resets come in a number of different forms, although usually over the course of at least a weekend. They take me out of my comfort zone and allow me to process information with a fresh perspective.</p><p>Recently, I read <em>How to Change Your Mind</em> by Michael Pollan, and I absolutely loved the following quote:</p><blockquote>Mendel Kaelen, a Dutch postdoc in the Imperial lab, proposes a more extended snow metaphor:<br><br>"Think of the brain as a hill covered in snow, and thoughts as sleds gliding down that hill. As one sled after another goes down the hill a small number of main trails will appear in the snow. And every time a new sled goes down, it will be drawn into preexisting trails, almost like a magnet. In time it becomes more and more difficult to glide down the hill on any other path or in a different direction."</blockquote><p>It is, of course, completely obvious in retrospect that this is what occurs in our brains as we succumb to patterns, repetitive processes, and the grooves we ourselves carve in the snow-covered hill of our minds.</p><p>This effect has been inflated further with a pandemic that requires a reduction in travel and extraneous activity, so the need to reset is more important now than previously. 
Regardless of the number of five-minute breathing exercises or lunchtime walks that are taken, I believe they are of next to no use when the majority of the day/week is broadly the same – hence the need for more aggressive resetting.</p><h3 id="how-i-reset">How I reset</h3><p>I like hiking, and I like yoga.</p><p>I've reset twice recently and used each method to take myself out of my comfort zone and the normal routines that occupy my day and my headspace.</p><p>Whilst I've been unable to attend an <em>actual </em>yoga retreat, I recently spent a weekend at home attending my own <a href="https://www.alomoves.com/series/weekend-yoga-reset?ref=adammalone.net">stay at home retreat</a>. Three days and five sessions provided me with both the exercise and rest required to come back to the next week more renewed than I normally would.</p><p>Each of the active sessions was paired so well with stretching and remediation. The weekend culminated in guided meditation that provided me with an opportunity to step outside myself and observe my strengths and weaknesses. A chance to spend time introspectively examining the parts of myself I liked, and those I wished to discard.</p><p>More recently, I spent a weekend hiking the <a href="http://www.wildwalks.com/bushwalking-and-hiking-in-nsw/royal-national-park/the-coast-track.html?ref=adammalone.net">Coast Track in Royal National Park</a>. While not the most technically difficult hike, it is one of the most continuously beautiful. 
With the cliffs, beaches, and rocks of the New South Wales coast to our left over the two day expedition, I was bombarded, continuously, with natural beauty.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="/content/images/2020/08/61924011028__CBC3FFC9-30BD-4040-82F1-F0020E9BA169-2.jpeg" width="4032" height="3024" alt srcset="/content/images/size/w600/2020/08/61924011028__CBC3FFC9-30BD-4040-82F1-F0020E9BA169-2.jpeg 600w, /content/images/size/w1000/2020/08/61924011028__CBC3FFC9-30BD-4040-82F1-F0020E9BA169-2.jpeg 1000w, /content/images/size/w1600/2020/08/61924011028__CBC3FFC9-30BD-4040-82F1-F0020E9BA169-2.jpeg 1600w, /content/images/size/w2400/2020/08/61924011028__CBC3FFC9-30BD-4040-82F1-F0020E9BA169-2.jpeg 2400w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="/content/images/2020/08/IMG_3325-1.jpeg" width="4032" height="3024" alt srcset="/content/images/size/w600/2020/08/IMG_3325-1.jpeg 600w, /content/images/size/w1000/2020/08/IMG_3325-1.jpeg 1000w, /content/images/size/w1600/2020/08/IMG_3325-1.jpeg 1600w, /content/images/size/w2400/2020/08/IMG_3325-1.jpeg 2400w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="/content/images/2020/08/IMG_3318.JPG" width="3840" height="2160" alt srcset="/content/images/size/w600/2020/08/IMG_3318.JPG 600w, /content/images/size/w1000/2020/08/IMG_3318.JPG 1000w, /content/images/size/w1600/2020/08/IMG_3318.JPG 1600w, /content/images/size/w2400/2020/08/IMG_3318.JPG 2400w" sizes="(min-width: 720px) 720px"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="/content/images/2020/08/IMG_3312.jpeg" width="4032" height="3024" alt srcset="/content/images/size/w600/2020/08/IMG_3312.jpeg 600w, /content/images/size/w1000/2020/08/IMG_3312.jpeg 1000w, /content/images/size/w1600/2020/08/IMG_3312.jpeg 1600w, 
/content/images/size/w2400/2020/08/IMG_3312.jpeg 2400w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="/content/images/2020/08/IMG_3313.jpeg" width="4032" height="3024" alt srcset="/content/images/size/w600/2020/08/IMG_3313.jpeg 600w, /content/images/size/w1000/2020/08/IMG_3313.jpeg 1000w, /content/images/size/w1600/2020/08/IMG_3313.jpeg 1600w, /content/images/size/w2400/2020/08/IMG_3313.jpeg 2400w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="/content/images/2020/08/IMG_3324.jpeg" width="4032" height="3024" alt srcset="/content/images/size/w600/2020/08/IMG_3324.jpeg 600w, /content/images/size/w1000/2020/08/IMG_3324.jpeg 1000w, /content/images/size/w1600/2020/08/IMG_3324.jpeg 1600w, /content/images/size/w2400/2020/08/IMG_3324.jpeg 2400w" sizes="(min-width: 720px) 720px"></div></div></div><figcaption>Pictures from the Coast Track</figcaption></figure><p>The Coast Track also features one of my favourite places to camp – near Garie Beach. Being away from the wider world, <em>most</em> other people, and alone with friends and nature is something I try to manage a few times a year. Not only does this provide me with a dedicated opportunity for male bonding with close friends, but the physical removal is symbolic of the removal from my comfort zone.</p><p>Hiking equipment is probably one of the only things I'll lavish upon myself, so an excuse to use it is frequently warranted.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="/content/images/2020/08/IMG_3323.jpeg" class="kg-image" alt><figcaption>Garie Beach, NSW</figcaption></figure><h3 id="how-you-reset">How you reset</h3><p>This one is up to you. Take yourself out of your comfort zone, and then keep doing it. 
I like to imagine the comfort zone in the same way I imagine the past.</p><blockquote>Nice place to visit, but you wouldn't want to live there.</blockquote><p>The <em><a href="https://rationalwiki.org/wiki/Good_old_days?ref=adammalone.net">Golden Age Fallacy</a></em> warns us against believing the past was better than it really was. RationalWiki explains it far better than I could, and I think it's important to treat your comfort zone with the same degree of wariness as focusing too much on days gone by. It's a safe place to visit when you need it, but don't get lulled into the trap of luxuriating in its stagnating comfort.</p><p>This post should act as a PSA to others, but also as a reminder to me.</p><p>Don't get stuck in your grooves.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ New year, new blog ]]></title>
        <description><![CDATA[ It&#39;s been quite literally years that I&#39;ve been putting off updating my blog,
both in the underlying technology as well as the content that resides within.
While August is probably eight months too late to invoke the new year in a blog
title, it&#39;s ]]></description>
        <link>https://www.adammalone.net/into-2020/</link>
        <guid isPermaLink="false">5f34fa7c212c8c7acd0084ff</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 16 Aug 2020 09:00:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1577807482863-26b179859454?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>It's been quite literally years that I've been putting off updating my blog, both in the underlying technology as well as the content that resides within. While August is probably eight months too late to invoke the new year in a blog title, it's a theme that I can work around.</p><h3 id="the-high-level">The high level</h3><p>My blog in its old incarnation has been around since May 2012, using mostly the same technology and thematic components. Its age, however, definitely shows when it's compared to more modern SaaS-based blogging platforms (I'm looking at you, Medium).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="/content/images/2020/08/Malone_Thought_Repository___About_Adam_Malone.png" class="kg-image" alt srcset="/content/images/size/w600/2020/08/Malone_Thought_Repository___About_Adam_Malone.png 600w, /content/images/size/w1000/2020/08/Malone_Thought_Repository___About_Adam_Malone.png 1000w, /content/images/size/w1600/2020/08/Malone_Thought_Repository___About_Adam_Malone.png 1600w, /content/images/size/w2400/2020/08/Malone_Thought_Repository___About_Adam_Malone.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>My old website.</figcaption></figure><p>One thing I was passionate about back in 2012 and <em>still</em> remain passionate about now is open-source software and keeping my eye in technically. It would be out of character for me to:</p><!--kg-card-begin: markdown--><ol>
<li>Use proprietary technology to power my personal blog</li>
<li>Use a SaaS platform which takes all of the fun out of understanding the tech and learning something new</li>
</ol>
<!--kg-card-end: markdown--><p>To keep my blog and underlying server free of vulnerabilities to this point, I've been <em>semi-</em>regimented about updating Drupal 7 soon after each release. <a href="https://www.drupal.org/core/release-cycle-overview?ref=adammalone.net">To keep myself free of technical debt in the future</a> however, I needed to update to Drupal 8. Because this was an involved task, I put it off long enough that <a href="https://www.drupal.org/about/9?ref=adammalone.net">Drupal 9</a> was released, and by then I honestly couldn't be bothered.</p><p>The other thing that I couldn't get past for years is how much the 'look and feel' of my blog stunk.</p><p>This yearning for a more basic platform as well as one that looks great out of the box put me onto <a href="https://ghost.org/?ref=adammalone.net">Ghost</a>. It's open-source, it doesn't really allow much customisation (so it's easier to update), and looks literally 1000x better than what I made myself.</p><h3 id="the-details">The details</h3><p>While I could have used the SaaS version of Ghost and paid for the convenience, my technical eye likes to remain involved so I went down the DIY route. I've documented the process I took below using Ansible to provision it onto one of my servers.</p><p>Looking at <a href="https://ghost.org/docs/concepts/hosting/?ref=adammalone.net">Ghost's requirements</a>, I already had pretty much everything installed on my server. That said, I've been using Apache instead of NGINX, and I religiously use configuration management so I'm safer in the event of catastrophe. I recently had a home server die on me and thanks to Ansible was back up and running 2 hours after I had a new USB stick.</p><p>My server's <code>main.yml</code> for Ghost looks something like this (unrelated components have been removed):</p><pre><code class="language-yaml"># Apache configuration
apache_listen_port: 80
apache_remove_default_vhost: true
apache_ssl_protocol: "all -SSLv3"
apache_ssl_cipher_suite: "HIGH:!aNULL"
apache_mods_enabled:
  - rewrite.load
  - ssl.load
  - proxy.load
  - proxy_http.load
  
apache_global_vhost_settings: |
  ServerName localhost
  &lt;VirtualHost *:80&gt;
    &lt;Location /&gt;
        Deny from all
        Options None
        ErrorDocument 403 Forbidden.
    &lt;/Location&gt;
  &lt;/VirtualHost&gt;

apache_vhosts:
  - servername: "adammalone.net"
    serveralias: "www.adammalone.net"
    documentroot: "/var/www/html/ghost"
    extra_parameters: |
            ProxyRequests Off
            &lt;proxy&gt;
            # Require all granted
            &lt;/proxy&gt;
            AllowEncodedSlashes On
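            # Ghost's Node.js process listens on 127.0.0.1:2368 by default;
            # nocanon passes the raw (still-encoded) URL through to it untouched.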
            ProxyPass / http://127.0.0.1:2368/ nocanon
            ProxyPassReverse / http://127.0.0.1:2368/
            
apache_vhosts_ssl:
  - servername: "adammalone.net"
    serveralias: "www.adammalone.net"
    documentroot: "/var/www/html/ghost"
    certificate_file: "/etc/ansible/keys/adammalone.crt"
    certificate_key_file: "/etc/ansible/keys/adammalone.key"
    extra_parameters: |
            RewriteEngine on
            RewriteCond %{HTTP_HOST} ^adammalone.net$
            RewriteRule ^/(.*)$ https://www.adammalone.net/$1 [L,R=301]
            ProxyRequests Off
            &lt;proxy&gt;
            # Require all granted
            &lt;/proxy&gt;
            AllowEncodedSlashes On
            ProxyPass / http://127.0.0.1:2368/ nocanon
            ProxyPassReverse / http://127.0.0.1:2368/
            
mysql_packages:
  - mariadb-client
  - mariadb-server
  - python-mysqldb
  
mysql_expire_logs_days: "7"
mysql_root_password: 'foobar'
mysql_bind_address: '127.0.0.1'
mysql_key_buffer_size: "164M"
mysql_max_allowed_packet: "64M"
mysql_table_open_cache: "750"
mysql_innodb_buffer_pool_size: "164M"
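# Variables the role doesn't expose directly (such as innodb_default_row_format,
# needed later for Ghost's utf8mb4 indexes) can go into an override file instead: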
mysql_config_include_files:
  - name: 'my.overrides.cnf'
    src: '/etc/ansible/cnf/my.overrides.cnf'

mysql_databases:
  - {name: ghost, encoding: utf8mb4, collation: utf8mb4_general_ci}
mysql_users:
  - {name: ghost, host: localhost, password: 'foobar', priv: 'ghost.*:ALL', append_privs: 'yes', state: 'present'}

nodejs_version: "12.x"
nodejs_install_npm_user: "typhonius"
nodejs_npm_global_packages:
  - ghost-cli
</code></pre><p>My <code>server.yml</code> looks similar to the following:</p><pre><code class="language-yaml">- hosts: localhost
  vars_files:
    - vars/main.yml
  roles:
    - { role: geerlingguy.apache }
    - { role: geerlingguy.mysql }
    - { role: geerlingguy.nodejs }</code></pre><h3 id="challenges">Challenges</h3><p>One of the stumbling blocks I spent some time fixing was the MySQL configuration in <code>/etc/ansible/cnf/my.overrides.cnf</code>. I kept getting the following error when running <code>ghost install</code>.</p><pre><code class="language-bash">Message: alter table `members_stripe_customers` add unique `members_stripe_customers_customer_id_unique`(`customer_id`) - ER_INDEX_COLUMN_TOO_LONG: Index column size too large. The maximum column size is 767 bytes.</code></pre><p>Looking around online, I realised that I'd be able to fix this issue by changing the <a href="https://dev.mysql.com/doc/refman/5.6/en/innodb-row-format.html?ref=adammalone.net">innodb_default_row_format</a> MySQL variable to <code>dynamic</code> rather than <code>compact</code>. The documentation told me that <code>dynamic</code> does much the same as <code>compact</code> but supports index key prefixes longer than 767 bytes. That limit is easy to hit with <code>utf8mb4</code>: at four bytes per character, a unique index on a 255-character VARCHAR needs up to 1,020 bytes, and anything past 191 characters (767 ÷ 4, rounded down) overflows the <code>compact</code> limit.</p><blockquote>I think I could also have changed my default encoding from <code>utf8mb4</code> to <code>utf8</code>, but doing so would have prevented me from being able to use the extended character set (hello emojis?).</blockquote><p>The Ansible role I use to manage MySQL has no parameter that I could easily adjust to change row formats, but it does have the ability to add custom parameters by file inclusion. My custom override is a single line.</p><pre><code class="language-ini">innodb_default_row_format = 'DYNAMIC'</code></pre><p>Another challenge I faced was how to configure Apache to proxy requests to a Node server listening on a local port while passing through the correct parameters in the right format.</p><p>The keys to solving that particular problem were to ensure that I had the right Apache mods loaded (rewrite, ssl, proxy, proxy_http) and that I used the right parameters in my vhost directives. 
I needed to set <code>AllowEncodedSlashes</code> <a href="https://httpd.apache.org/docs/current/mod/core.html?ref=adammalone.net#allowencodedslashes">to on</a> and add <code>nocanon</code> to <a href="https://httpd.apache.org/docs/current/mod/mod_proxy.html?ref=adammalone.net#proxypass">ProxyPass</a>. Both of these allowed me to pass URLs (and slashes) through from Apache to the Node backend and get the right results from the Ghost API.</p><p>The final issue I had was in updating the injected scripts I was planning to use in the header and footer. My access to the API was blocked when I tried to add anything with <code>&lt;script&gt;</code> tags. Understandably, my first (and correct) suspect was Cloudflare. </p><figure class="kg-card kg-image-card"><img src="/content/images/2020/08/Firewall___adammalone_net___Account___Cloudflare_-_Web_Performance___Security.png" class="kg-image" alt srcset="/content/images/size/w600/2020/08/Firewall___adammalone_net___Account___Cloudflare_-_Web_Performance___Security.png 600w, /content/images/size/w1000/2020/08/Firewall___adammalone_net___Account___Cloudflare_-_Web_Performance___Security.png 1000w, /content/images/size/w1600/2020/08/Firewall___adammalone_net___Account___Cloudflare_-_Web_Performance___Security.png 1600w, /content/images/2020/08/Firewall___adammalone_net___Account___Cloudflare_-_Web_Performance___Security.png 1752w" sizes="(min-width: 720px) 720px"></figure><p>I adjusted my configuration to restrict my admin URL to my home IP and VPN. I also added a firewall rule to remove managed rules on those paths so I could adjust the settings and save to the database. 
As an aside, I did also work out that I could manually alter the <code>codeinjection_head</code>/<code>codeinjection_foot</code> keys in the <code>settings</code> table before restarting Ghost to allow the changes to take effect.</p><p>With these relatively trivial issues fixed, I ported my content across, added some far nicer images from <a href="https://unsplash.com/?ref=adammalone.net">Unsplash</a>, and set up my site how I wanted.</p><h3 id="the-future">The future</h3><p>I've missed writing.</p><p>I've long said that creativity manifests itself in more ways than images, music, and words. I postulate that creativity also comes from tech architecture, some neat code to solve a problem, or pretty much anything that can automate something away or increase utility.</p><p>This blog has been a place for me to write about my creative endeavours, and I want to continue doing that. I'll try to write about the things I'm playing with and intersperse them with my own ideas, opinions and thoughts where appropriate.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ IP Restrictions behind Cloudflare and Varnish ]]></title>
        <description><![CDATA[ I&#39;ve recently been working with a client using Drupal, Varnish, and Cloudflare
as part of their digital transformation journey. The client had requirements to
ensure that requests coming in through Cloudflare, which should be all requests,
would include a check to ensure only their internal IP ranges and ]]></description>
        <link>https://www.adammalone.net/ip-restrictions-behind-cloudflare-and-varnish/</link>
        <guid isPermaLink="false">5f33a19021b8f9692ae93a3c</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Mon, 09 Oct 2017 22:19:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1542577195-d562c6698ff3?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>I've recently been working with a client using Drupal, Varnish, and Cloudflare as part of their digital transformation journey. The client had requirements to ensure that requests coming in through Cloudflare, which should be all requests, would include a check so that only their internal IP ranges and ours would be able to access administration pages on the site.</p><p>Looking at the Cloudflare interface, I could see that while there are whitelists and blacklists, none can be based on paths. So let's allow Varnish to come to the rescue.</p><p>Initially, I tried using the <code>client.ip</code> variable provided by Varnish to check against ACLs, but upon further investigation when it didn't work (including a deep dive into the Varnish source), I realised that Varnish was using the IP address garnered from the socket connection made to the server. With Cloudflare sitting between the user and Varnish, Varnish was seeing <code>client.ip</code> as Cloudflare's address, not the user's.</p><p>From here, I attempted to compare the <code>CF-Connecting-IP</code> header sent by Cloudflare to the ACLs defined. Varnish didn't like this one bit, as all headers are defined as strings whereas <code>client.ip</code> (and what ACLs expect) is a special IP structure, not a string.</p><p>Even further investigation revealed that Varnish 4's bundled 'std' vmod includes a method, <code>std.ip()</code>, which converts strings to this special IP structure. Putting all of these things together, we come up with the following snippet to block access to /user and /admin for users not accessing via the defined IPs.</p><p>Finally, and most importantly, remember to define your ACLs at the very top of your VCL, especially if you're concatenating multiple VCL files.</p><pre><code class="language-puppet"># The std vmod provides std.ip(); import it before use.
import std;

acl deloitte {
  "1.2.3.4";
  "2.3.4.5";
}
 
acl client {
  "5.6.7.8";
  "6.7.8.9";
}
 
sub vcl_recv {
  # Block access for the administrative part of the site for users not in Deloitte or client.
  if (req.http.CF-Connecting-IP) {
    if ((req.url ~ "^/user" || req.url ~ "^/admin") &amp;&amp; !(std.ip(req.http.CF-Connecting-IP, "0.0.0.0") ~ deloitte || std.ip(req.http.CF-Connecting-IP, "0.0.0.0") ~ client)) {
      return (synth(403, "Forbidden"));
    }
  }
}
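</code></pre><p>As an illustration only (this is a sketch, not part of the VCL), the same convert-then-match logic can be expressed with Python's <code>ipaddress</code> module: parse the header string into an address object, fall back to 0.0.0.0 when parsing fails (as <code>std.ip()</code> does above), then test membership against the allowed networks. The addresses are the placeholder values from the ACLs.</p><pre><code class="language-python">import ipaddress

# Placeholder networks standing in for the "deloitte" and "client" ACLs.
ACLS = [ipaddress.ip_network("1.2.3.4/32"), ipaddress.ip_network("5.6.7.8/32")]

def is_allowed(header_value):
    # Headers are strings; convert to an address object first, falling
    # back to 0.0.0.0 when the value fails to parse, as std.ip() does.
    try:
        addr = ipaddress.ip_address(header_value)
    except ValueError:
        addr = ipaddress.ip_address("0.0.0.0")
    return any(addr in net for net in ACLS)

print(is_allowed("1.2.3.4"))  # True (matches the first ACL)
print(is_allowed("9.9.9.9"))  # False (blocked)
</code></pre><p>The 403 itself is rendered by <code>vcl_synth</code>:</p><pre><code class="language-puppet">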
 
sub vcl_synth {
  if (resp.status == 403) {
    set resp.http.Content-Type = "text/html; charset=utf-8";
    synthetic( {"&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;title&gt;"} + resp.status + " " + resp.reason + {"&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;p&gt;Error "} + resp.status + " " + resp.reason + {"&lt;/p&gt;
  &lt;/body&gt;
&lt;/html&gt;
"} );
    return (deliver);
  }
}</code></pre> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Using Toran Proxy to speed up Drupal builds ]]></title>
        <description><![CDATA[ Over the last couple of days an internal thread has been making the rounds at Acquia about speeding up Composer for Drupal builds. With Drupal 8, Lightning and the BLT project making heavy use of Composer to manage its dependencies, users frequently rebuilding from source, or those in remote regions ]]></description>
        <link>https://www.adammalone.net/using-toran-proxy-speed-drupal-builds/</link>
        <guid isPermaLink="false">5f33a0f121b8f9692ae93a1b</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Thu, 22 Dec 2016 01:51:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1542274368-443d694d79aa?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>Over the last couple of days an internal thread has been making the rounds at Acquia about speeding up <a href="https://getcomposer.org/?ref=adammalone.net">Composer</a> for Drupal builds. With Drupal 8, <a href="https://lightning.acquia.com/?ref=adammalone.net">Lightning</a> and the <a href="https://github.com/acquia/blt?ref=adammalone.net">BLT project</a> making heavy use of Composer to manage their dependencies, users frequently rebuilding from source, or those in remote regions with slow internet, <a href="https://xkcd.com/303/?ref=adammalone.net">face a lot of dead time</a>.</p><p>Within the discussion, someone mentioned <a href="https://toranproxy.com/?ref=adammalone.net">Toran Proxy</a>, which acts as a mirror for Packagist, GitHub and other repositories that store code libraries. Because I live in Australia, connectivity to external repositories is sometimes extremely slow; this led me to try Toran out.</p><p>My home development server uses Fedora 19 (Schrödinger’s Cat) and it's a place I frequently try out scripts, applications and other tooling to both verify it does what it says, and keep a technical eye in as my career drifts ever further away. I was able to translate the download/configuration <a href="https://toranproxy.com/download?ref=adammalone.net">instructions</a> into a Puppet manifest and have Toran deployed on my server in relatively short order; however, I ran into issues with ensuring BLT (which uses packages.drupal.org) downloaded packages from the right places.</p><p>In the end, the solution which worked for me was to patch one line in one file (src/Toran/ProxyBundle/Command/CronCommand.php) of Toran Proxy, whilst ensuring I continually ran Toran's inbuilt cron to generate the right resources for my local build to pull in. Prior to doing this I was running into dependency issues that led me down seven or eight different garden paths. 
My definitive guide to getting Toran Proxy set up with BLT is as follows, and presumes all initial instructions provided by the Toran Proxy team have been followed:</p><ul><li>Apply the attached patch</li><li>Navigate to the /settings page and use the following image as a rough guide for configuring your instance</li></ul><figure class="kg-card kg-image-card"><img src="https://cdn.adammalone.net/cdn/farfuture/vivY62PdkZzzwQdtTPlKuAfHD0a7gy4yJcwPKDYPI20/mtime:1482370847/sites/adammalone/files/toran_proxy_0.png" class="kg-image" alt loading="lazy"></figure><ul><li>Run Toran's cron by executing <code>php bin/cron -v</code></li><li>Alter your BLT <code>composer.json</code> to use the following (changing the repo URL from toran.adammalone.net to the domain your Toran instance runs on, and removing the <code>secure-http</code> parameter if your mirror uses HTTPS)</li></ul><pre><code class="language-json">"config": {
  "secure-http": false
},
"repositories": {
  "0": {
    "type": "composer",
    "url": "http://toran.adammalone.net/repo/private/"
  },
  "1": {
    "type": "composer",
    "url": "http://toran.adammalone.net/repo/packagist/"
  },
  "2": {
    "packagist": false
  }
},</code></pre><ul><li>Cross your fingers and run <code>composer install</code></li><li>If any parts of the build fail, examine your <code>app/toran/config.yml</code> file and ensure that all the Drupal packages (except Coder) are tagged with packages.drupal.org and all non-Drupal packages are tagged with packagist.org.</li><li>Create a new entry in crontab to run the Toran cron as frequently as desired so packages are continually updated.</li></ul><p>Your mileage may vary, but I was able to reduce a 40-minute build down to around 8 minutes as my laptop was sourcing libraries from a server 3 metres away rather than the other side of the world. The only slowdowns for me were packages tagged with '*' or '-dev' as they bypass Toran and don't get added to my local cache.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Installing an Origin CA cert in Pound ]]></title>
        <description><![CDATA[ Recently I was approached by one of the Cloudflare channel team as they advised
all customers about Google&#39;s announcement
[https://security.googleblog.com/2016/10/distrusting-wosign-and-startcom.html] 
about distrusting SSL certificates from two certificate authorities (&quot;CAs&quot;):
WoSign and StartCom. Google&#39;s announcement joins Mozilla
[https: ]]></description>
        <link>https://www.adammalone.net/installing-origin-ca-cert-pound/</link>
        <guid isPermaLink="false">5f33a05f21b8f9692ae939fe</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Fri, 16 Dec 2016 04:30:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1505142468610-359e7d316be0?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>Recently I was approached by one of the Cloudflare channel team as they advised all customers of <a href="https://security.googleblog.com/2016/10/distrusting-wosign-and-startcom.html?ref=adammalone.net">Google's announcement</a> about distrusting SSL certificates from two certificate authorities ("CAs"): WoSign and StartCom. Google's announcement joins <a href="https://blog.mozilla.org/security/2016/10/24/distrusting-new-wosign-and-startcom-certificates/?ref=adammalone.net">Mozilla</a> and <a href="https://support.apple.com/en-us/HT204132?ref=adammalone.net">Apple</a> and now represents the majority of human-driven browsers.</p><p>As I was using a StartCom free SSL certificate, it was in my best interests to migrate off it and find a more reputable CA. I needed to replace my certificate in two locations to ensure I could enable '<a href="https://support.cloudflare.com/hc/en-us/articles/200170416-What-do-the-SSL-options-mean-?ref=adammalone.net">Full SSL (Strict)</a>' and protect not only my edge at the CDN but also my origin.</p><p>Looking to see whether Cloudflare had documented Pound as an SSL termination point, I saw a gap that this blog post aims to fill.</p><p><strong><strong>1. Obtain private key and origin certificate pair</strong></strong><br>After completing the steps to generate the private key and origin certificate, download both in .pem format. Concatenate the private key, any CA certificates and the site certificate into a single .pem file.</p><p><strong><strong>2. Copy the combined pem file to your origin server</strong></strong><br>Copy the concatenated file and move it to the directory on your server where you will keep your key and certificate files. Typically this is /etc/ssl/certs.</p><p><strong><strong>3. Locate your Pound config file</strong></strong><br>Pound’s main configuration file is typically named pound.cfg. 
Possible locations for this file are /etc/pound.cfg or /usr/local/etc/pound.cfg, depending on the operating system in use.</p><p><strong><strong>4. The default pound.cfg file can be amended to direct traffic to specified sites</strong></strong><br>You will need to create a Service definition containing, at minimum, a Backend block with Address and Port parameters. This allows Pound to direct traffic to a backend once a request has been received. Optionally, HeadRequire can be used within multiple Service blocks to separate traffic based on request headers.</p><pre><code class="language-apacheconf"># Global options:
User            "apache"
Group           "apache"
LogLevel       3
 
# Check backend every 20 secs:
Alive          20
 
# poundctl control socket
Control "/var/run/poundctl.socket"
 
# Backend service for any domains matching *.adammalone.net
Service
  HeadRequire "Host:.*adammalone.net.*"
    Backend
      Address 127.0.0.1
      Port 80
    End
End</code></pre><p><strong><strong>5. Add a ListenHTTPS block for SSL</strong></strong><br>Below is a simple example of Pound configured to use SSL. The entire ListenHTTPS block must be appended to the pound.cfg file to allow SSL. Optional parameters for adding headers and specifying ciphers further secure the application behind.</p><pre><code class="language-apacheconf"># SSL Termination
ListenHTTPS
  Address 0.0.0.0
  Port    443
  HeadRemove "X-Forwarded-Proto"
  AddHeader "X-Forwarded-Proto: https"
  Cert "/etc/ssl/certs/cloudflare.pem"
  Ciphers "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AES:RSA+3DES:!ADH:!AECDH:!MD5:!DSS"
End</code></pre><p>Ensure the Cert parameter matches the location of the combined pem file you moved in step 2.</p><p><strong><strong>6. Test your Pound configuration before restarting</strong></strong><br>Best practice is to check your configuration files before restarting Pound, as Pound will not start if there are errors in the configuration. The following command will test your configuration files.</p><pre><code class="language-bash">$ pound -c
starting...
Config file /etc/pound.cfg is OK</code></pre><p><strong><strong>7. Restart Pound</strong></strong></p><pre><code class="language-bash">$ /etc/init.d/pound restart</code></pre> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Migrating into multisite ]]></title>
        <description><![CDATA[ Quite often in my role as a Solutions Architect at Acquia, I&#39;ll see customers
looking to bring sites under the multisite banner in order to enact a more
controlled code governance model. Amalgamating codebases allows for a more
controlled site development experience where 50 different sites can be ]]></description>
        <link>https://www.adammalone.net/migrating-multisite/</link>
        <guid isPermaLink="false">5f339fd721b8f9692ae939e6</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 08 May 2016 23:50:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1553388365-50523dd1a335?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>Quite often in my role as a Solutions Architect at Acquia, I'll see customers looking to bring sites under the multisite banner in order to enact a more controlled code governance model. Amalgamating codebases allows for a more controlled site development experience where 50 different sites can be thought of as one cohesive unit rather than disparate and isolated nuggets of technical debt. This blog extends one I wrote a while ago about the opposite operation of <a href="/post/migrating-multisite-singlesite">extracting sites from within a multisite</a>.</p><p>As that article covered, a site might be separated out of a multisite if its functionality expands in scope further than what the platform should reliably be dealing with. Ideally, any shared platform would ensure the majority of functionality included in the codebase was utilised by all sites to minimise the possibility of code bloat.</p><h2 id="what-is-code-bloat"><strong>What is code bloat?</strong></h2><p>Within the Drupal community, many pieces of code exist to connect Drupal sites with external services and increase the overall level of functionality of the site. Because Drupal's architecture keeps functionality in code and content/configuration in the database, many different sites can run from the same codebase. It's often preferable to map functionally similar sites to the same codebase to allow one bug fix or patch update to be applied to multiple running websites simultaneously, lowering the development burden and reducing the maintenance time and spend. This should be contrasted with entirely different sites, e.g. an intranet vs an eCommerce store. Typically the functional requirements of the two sites would be completely different, leading to less code sharing and reduced effectiveness of a shared codebase. 
In these instances, I typically recommend using separate codebases to contain the functionality and reduce the amount of unused code per site.</p><h2 id="how-can-i-make-development-more-efficient"><strong>How can I make development more efficient?</strong></h2><p>If two similar Drupal sites exist in different codebases, it can be preferable to run them from a shared codebase. The process of migrating an existing site into a multisite arrangement is reasonably trivial provided the following steps are adhered to.</p><ul><li>Create a directory within the sites directory to house the site-specific modules, files and database connection information, e.g. sites/mysitename</li><li>Copy across everything from within the old site code tree from sites/default to sites/mysitename.</li><li>Ensure that all modules existing on the old site within sites/all/modules are present on the multisite codebase in some way. N.B. if these modules would benefit other sites on the multisite, they should be added to sites/all; if they are custom or bespoke to the site being transferred in, they should be moved to sites/mysitename</li><li>If on another server, migrate the database from the old site to the database present on the server where the multisite resides</li><li>Connect to the database by typing <code>mysql -uUSERNAME -pPASSWORD -DDATABASENAME</code> or navigate to the new site directory (sites/mysitename) and type <code>drush sqlc</code>.</li><li>Run the following queries, substituting in the correct name for your site directory and only changing the path for the custom modules that have been placed in the sites/mysitename/modules directory</li></ul><pre><code class="language-sql">UPDATE system SET filename = REPLACE(filename, 'sites/all/modules', 'sites/mysitename/modules') WHERE name IN ('custom_module', 'mysite_feature', 'migration_module');
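-- Keep this order: the next statement's JOIN matches rows via registry.filename,
-- which the final UPDATE rewrites.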
UPDATE registry_file rf JOIN registry r ON rf.filename = r.filename SET rf.filename = REPLACE(rf.filename, 'sites/all/modules', 'sites/mysitename/modules') WHERE r.module IN ('custom_module', 'mysite_feature', 'migration_module');
UPDATE registry SET filename = REPLACE(filename, 'sites/all/modules', 'sites/mysitename/modules') WHERE module IN ('custom_module', 'mysite_feature', 'migration_module');</code></pre><ul><li>It's important to run the SQL statements in this order, as the second query relies on a JOIN operation that won't work after the third query.</li><li>Clear your cache with <code>drush cc all</code> (you may also need to manually truncate the cache, cache_menu and menu_router tables first).</li><li>Finally, change your filesystem path(s) variable in the Drupal settings from sites/default/files to sites/mysitename/files.</li></ul> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Verifying SSL certificates with drupal_http_request ]]></title>
        <description><![CDATA[ Recently I was posed with the question about verifying self-signed SSL
certificates with drupal_http_request()
[https://api.drupal.org/api/drupal/includes%21common.inc/function/drupal_http_request/7]
. The usecase here would be to use private APIs to surface information, secured
with SSL, yet using an internally created ]]></description>
        <link>https://www.adammalone.net/verifying-ssl-certificates-drupalhttprequest/</link>
        <guid isPermaLink="false">5f339f6421b8f9692ae939ca</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Wed, 02 Sep 2015 15:18:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1531417666976-ed2bdbeb043b?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
<content:encoded><![CDATA[ <p>Recently I was posed with the question of verifying self-signed SSL certificates with <a href="https://api.drupal.org/api/drupal/includes%21common.inc/function/drupal_http_request/7?ref=adammalone.net">drupal_http_request()</a>. The use case here would be to use private APIs to surface information, secured with SSL, yet using an internally created certificate.</p><p>By default, drupal_http_request() does not verify the SSL certificate of sites it connects to, unlike command line tools such as curl. If you want an extra layer of oversight, this default may be undesirable; however, because we're working with Drupal, we have the flexibility to alter it.</p><p>The following comment in <a href="https://api.drupal.org/api/drupal/includes%21common.inc/7?ref=adammalone.net">common.inc</a> provides us with a clue about how certificates may be checked.</p><pre><code class="language-php">// Create a stream with context. Allows verification of a SSL certificate.</code></pre><p>The use of <a href="https://secure.php.net/manual/en/function.stream-socket-client.php?ref=adammalone.net">stream_socket_client()</a> is key here, as it allows the modification of the stream context to pass in the expected certificate. We concatenate the private key, public key and any CA keys (unlikely if self-signed) to form a single pem file to use for verification. Then, we pass the location of this file into the context for drupal_http_request() and only receive information back if the certificates match.</p><pre><code class="language-php">$cert = '/home/website/ssl/example_com.pem';
$context = stream_context_create(array('ssl' =&gt; array('local_cert' =&gt; $cert, 'verify_peer' =&gt; true, 'verify_depth' =&gt; 5, 'allow_self_signed' =&gt; true)));

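// The single pem above is just the private key, certificate and any CA
// chain concatenated together (illustrative filenames):
//   cat example_com.key example_com.crt ca_chain.crt &gt; example_com.pem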
$request = drupal_http_request('https://example.com/', array('context' =&gt; $context));</code></pre><p>Further reading on <a href="https://secure.php.net/manual/en/context.ssl.php?ref=adammalone.net">setting correct SSL contexts may be found on php.net</a>.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Making Nagios check OpenVPN ]]></title>
        <description><![CDATA[ I&#39;ve been slowly expanding the amount of automation that runs on the servers I
personally maintain. With Puppet [https://puppetlabs.com/] as my configuration
management system I&#39;m able to deploy changes to however many of my servers
quickly and easily. Similarly, if any server dies a ]]></description>
        <link>https://www.adammalone.net/making-nagios-check-openvpn/</link>
        <guid isPermaLink="false">5f339ee421b8f9692ae939a4</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Tue, 10 Mar 2015 00:00:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1520869562399-e772f042f422?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
<content:encoded><![CDATA[ <p>I've been slowly expanding the amount of automation that runs on the servers I personally maintain. With <a href="https://puppetlabs.com/?ref=adammalone.net">Puppet</a> as my configuration management system I'm able to deploy changes to any number of my servers quickly and easily. Similarly, if any server dies a fiery death a new one can be spun up immediately with no data loss.</p><p>To ensure that I'm keeping tabs on the health of the boxes, I run <a href="http://www.nagios.org/?ref=adammalone.net">Nagios</a> on my master server and monitor an ever increasing list of services across my collection. Since I've recently added a puppet controlled VPN to my repertoire, it was only natural that I should want to ensure the <a href="https://openvpn.net/?ref=adammalone.net">OpenVPN</a> process was both online and responsive to connections.</p><p>Since I run OpenVPN on UDP port 1194, all the resources I managed to find online made reference to piping something towards the port via <a href="http://netcat.sourceforge.net/?ref=adammalone.net">netcat</a> or <a href="https://linux.die.net/man/1/nc?ref=adammalone.net">nc</a>. Unfortunately, because OpenVPN is not sensible, binary information needs to be sent rather than a few ASCII characters I can read and understand. Similarly, the response is equally indecipherable.</p><p>Through testing however, I was able to identify that unless I gave OpenVPN a very specific string of binary, it would time out on me. With this in mind, I was able to use the <a href="https://www.monitoring-plugins.org/doc/man/check_udp.html?ref=adammalone.net">check_udp</a> command that comes with Nagios, and a timeout to verify that OpenVPN was up and responding to VPN requests.</p><p>I could define the check_openvpn command for Nagios like this:</p><pre><code class="language-puppet"># We're not looking for a specific response here, more that we actually get
# one and not a timeout or no data.
nagios_command { 'check_openvpn':
  ensure       =&gt; present,
  command_line =&gt; '$USER1$/check_udp -H $HOSTADDRESS$ -p $ARG1$ -E -s "$38$01$00$00$00$00$00$00$00" -e "^@^@^@^@^@" -t 10 -M ok'
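  # -s sends a minimal binary handshake, -e matches the start of the reply,
  # and -M ok keeps a reply-content mismatch from being treated as a failure,
  # so only a timeout or no data at all raises an alert.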
}</code></pre><p>I could then call that check from within my custom VPN puppet configuration like this:</p><pre><code class="language-puppet"># We're using 1194 for this service and that's the only
# argument accepted by the openvpn check.
nagios::service { "check_openvpn_${fqdn}":
  check_command       =&gt; "check_openvpn!1194",
  service_description =&gt; 'openvpn',
}</code></pre> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Fighting back from Drupal hacks ]]></title>
        <description><![CDATA[ The last thing any website owner, developer or administrator wants to hear is
that they&#39;ve been hacked. Whether the cause was the fault of insecure passwords,
problematic file permissions, a vulnerability in the underlying code or the
myriad other potential issues, it&#39;s an undesirable situation to ]]></description>
        <link>https://www.adammalone.net/fighting-back-drupal-hacks/</link>
        <guid isPermaLink="false">5f339d4621b8f9692ae93957</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Wed, 19 Nov 2014 23:00:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1561736778-92e52a7769ef?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>The last thing any website owner, developer or administrator wants to hear is that they've been hacked. Whether the cause was the fault of insecure passwords, problematic file permissions, a vulnerability in the underlying code or the myriad other potential issues, it's an undesirable situation to be in.</p><p>When Drupal 7.32 was released and <a href="https://www.drupal.org/SA-CORE-2014-005?ref=adammalone.net">SA-CORE-2014-005</a> (<a href="https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3704&ref=adammalone.net">CVE-2014-3704</a>) was first made public, site operators had as little as <a href="https://www.drupal.org/PSA-2014-003?ref=adammalone.net"><strong><strong>7 hours to upgrade</strong></strong></a> before the hack attempts started.</p><p>With such a short window to upgrade, if websites weren't hosted with <a href="https://www.acquia.com/blog/shields?ref=adammalone.net">managed providers who could step in and provide mitigation and support</a>, options to tackle the vulnerability were drastically limited. These delays were attributable to:</p><ul><li>Large numbers of sites to upgrade with resources available and capable of performing patching stretched thin;</li><li>Low internal skill level requiring professional consultation to assist patching and deployment;</li><li>Developers who do not follow the community or security advisories and were therefore unaware of SA-CORE-2014-005 or its severity;</li><li>Slow moving or heavily bureaucratic organisations hampering agile development; or</li><li>Those who just want to watch the world burn.</li></ul><p>After the fact, resources were released to assist website owners who were not quick enough to upgrade and were concerned, rightly, that they may have fallen victim to a successful hacking attempt. 
<a href="https://www.drupal.org/node/2365547?ref=adammalone.net">User guides</a>, <a href="https://www.drupal.org/files/project-images/How%20to%20recover%20from%20Drupageddon%2C%20version%208.png?ref=adammalone.net">flow charts</a>, and <a href="https://www.drupal.org/project/drupalgeddon?ref=adammalone.net">code solutions</a> were created with the aim of providing assistance to those in a difficult situation. The underlying message across all of these resources was clear:</p><p><strong><strong>If the site was not upgraded within 7 hours, assume it was hacked and rebuild.</strong></strong></p><p>Recently I was provided with the opportunity to work on non-Acquia hosted sites which had not been upgraded to Drupal 7.32 until <strong><strong>10 days</strong></strong> after the release. I'm hoping that by documenting the steps that I and my colleagues took in response, it will serve as a guide for others in similar positions. Despite the caveat that there is <em><em>always</em></em> the possibility that a hack or exploit was undetected, our aim was to verify the state of these sites, purge any hacks/exploits we found and put the sites on new infrastructure.</p><h2 id="battle-plan"><strong>Battle Plan</strong></h2><p>Every Drupal site comprises, at a minimum, code, a database, and files. Each of these requires slightly different tactics when checking for potential exploits. Below, I've documented each of the steps taken against the various site components. Attached is a working document we created drawing on advice provided by the Drupal community, our own internal best practices, and tests at Acquia. As we progressed through the steps, each could be checked off as a way to measure our progress. 
I'd like to emphasize that keeping an offline forensic copy of a potentially hacked site is important and should be the first step in this process.</p><p>Working in conjunction with two other colleagues, we each took one of the code, the database and the files, rotating through until each component had been looked at by three sets of eyes. Following all of these steps, each component was migrated onto new hardware where it happily resides today.</p><h2 id="database"><strong>Database</strong></h2><p>Attempting to audit a database without a recent database backup to roll back to <strong><strong>will</strong></strong> be painful and exhausting. Without a recent backup to use, our example site needed to have its database checked by hand. The difficulty of this process was reduced by the following lucky facts:</p><ul><li>A database backup was available to compare against (albeit not a recent one);</li><li>The database wasn't massive (~20 users &amp; ~2000 nodes/associated field content);</li><li>Content was not added at high rates making the database relatively slow moving.</li></ul><p>While some areas of the database would need to be checked, there were a lot that could be ruled out with just a few command line snippets.</p><h3 id="truncate-all-transient-tables"><strong>Truncate all transient tables</strong></h3><p>Any table that stored non-essential data could be purged to ensure nothing persisted past the audit. These included, but were not limited to, cache tables, logging tables, session stores and even search indices. Anything that could be rebuilt and did not contain canonical data was fair game.</p><figure class="kg-card kg-image-card"><img src="/sites/adammalone/files/styles/large/public/_mysql_5.5.36-mariadb-log_boxen_stabled7.png?itok=WYn83zuJ" class="kg-image" alt></figure><p>In our case, using a database editor such as Sequel Pro made this trivial, as it was simply a case of selecting tables and truncating them in the UI. 
However, if you have no UI, or just want to play on the command line then the following may be used. Additional tables may be added to the TRUNCATE array if necessary:</p><pre><code class="language-bash">$ MYSQL_USER='username' \
MYSQL_PW='password' \
MYSQL_DB='db_name' \
TRUNCATE=(`mysql -u$MYSQL_USER -p$MYSQL_PW $MYSQL_DB -s -N -e "SHOW TABLES LIKE 'cache_%';"`) &amp;&amp; \
TRUNCATE+=('sessions' 'watchdog' 'search_dataset' 'search_index' 'flood') &amp;&amp; \
for table in ${TRUNCATE[*]}; do \
echo "Truncating $table"; mysql -u$MYSQL_USER -p$MYSQL_PW $MYSQL_DB -s -N -e "TRUNCATE $table"; \
done</code></pre><h3 id="checksum-the-remaining-tables-against-any-backup"><strong>Checksum the remaining tables against any backup</strong></h3><p>Once the tables had been truncated, each table in the database was checked for changes against the known backup. I scoured the internet for something that shows table diffs but could not locate anything conclusive. Diffing table dumps would have been eye-bleedingly-fun, so instead I opted to turn to the trusty <a href="http://www.percona.com/software/percona-toolkit?ref=adammalone.net">percona-toolkit</a>.</p><p>Percona-toolkit is easy enough to install with either brew, yum or another package manager, and it comes with a tool called <a href="http://www.percona.com/doc/percona-toolkit/2.2/pt-table-checksum.html?ref=adammalone.net">pt-table-checksum</a>. Whilst it is meant to be used for ensuring consistency across multi-master and master-slave database setups, it can also be used to compare two databases by using the following command:</p><pre><code class="language-bash">pt-table-checksum --ask-pass -uroot --databases=my_db_backup,my_db_live --nocheck-plan</code></pre><p>This command dumps a bunch of data into the checksums table of the percona database. This data can then be examined to determine whether any tables are different between the backup and the live database. In the example below, the watchdog table is empty, the first workbench_moderation table differs, but all other tables are the same.</p><figure class="kg-card kg-image-card"><img src="/sites/adammalone/files/styles/large/public/_mysql_5.5.20_boxen_percona_checksums.png?itok=g9A9jQYQ" class="kg-image" alt></figure><h3 id="dealing-with-the-remaining-tables"><strong>Dealing with the remaining tables</strong></h3><p>It was likely the remaining tables would include the node, field and user tables. Any content change or user login would, of course, cause the checksums of the tables to differ. 
At this point, the most thorough way of ensuring database sanitization was to import all tables from the backup not cleared by the above steps and then add in content manually, which was the approach we took.</p><p>However, as this can potentially take a lot of time, an acceptable alternative considered was to:</p><ul><li>Manually check recently added rows (new nodes / new users)</li><li>Run queries to determine where unsafe filter formats are being used and examine them (especially important where PHP filter is enabled)</li><li>Grep a database dump against common potential exploit code (base64, $_POST, eval etc)</li></ul><p>The decision to start again from backup or attempt to clean an existing table is really one of effort versus risk, so it is one that should be made after careful consideration.</p><p>A useful snippet to use for detecting php filter fields in the database is the following. Again, this is not something to rely on 100%, but it provides an excellent method of finding the highest risk areas where exploits may be.</p><pre><code class="language-php">&lt;?php

$formats = array('php_code');
$fields = db_query("SELECT field_name FROM {field_config} WHERE module = 'text'")-&gt;fetchCol();

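// Drupal 7 stores each text field in its own field_data_FIELDNAME table,
// with a FIELDNAME_format column recording the filter format in use.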
foreach ($fields as $field) {
  $format_field = $field . '_format';
  $field_table = 'field_data_' . $field;
  $result = db_query("SELECT entity_type, entity_id FROM {" . $field_table . "} WHERE $format_field IN (:formats) GROUP BY entity_id, entity_type", array(':formats' =&gt; $formats));
  foreach ($result as $entity) {
    echo sprintf('Entity type %s with ID %d contains the field %s with PHP format', $entity-&gt;entity_type, $entity-&gt;entity_id, $field) . PHP_EOL;
  }
}</code></pre><h2 id="code"><strong>Code</strong></h2><p>Because a recent copy of the code wasn't available and the code wasn't versioned, the following steps were followed.</p><p>We downloaded a copy of the Drupal core being used by the site at the time of the security risk. This was then diffed for changes to Drupal core. Any additional patches were matched to the differences in the codebase during this step. It was important to check the potentially insecure codebase against a freshly downloaded copy of the appropriate version of Drupal to minimise the changes requiring investigation.</p><p>After that, contributed modules were diffed against the relevant version of the module to ensure no changes had been made. While the <a href="https://www.drupal.org/project/hacked?ref=adammalone.net">Hacked!</a> module provided some ability to easily check modules for additions or deletions, there was no provision for checking if new files had been added.</p><p>Custom modules and features were the final, and possibly most difficult, piece of the codebase to check. This was because, in some cases, there may be no record of this code anywhere online, particularly where developers were working directly on production. In cases such as these, each line of each custom module would have to be checked to ensure no nasty surprises lurked within. Luckily for us, the example site had kept backups of their custom code so we were able to diff against those backups and confirm all was well.</p><p>The site we were working on was running a Drupal 7.28 codebase so had to be compared to a Drupal 7.28 core. Similarly, each of the modules was outdated, so we needed to ensure we were downloading the correct module version before we ran each check. In cases where a site has a make file, it's simple to create a clone based on that. If the site you want to check does not contain a make file, then all is not lost. 
You just need a little more hacky drush-fu.</p><p>Placing the following in /path/to/codebase/auditdownload.php and running drush scr auditdownload.php will download all enabled modules of the correct version into /tmp/audit:</p><pre><code class="language-php">&lt;?php

mkdir('/tmp/audit/modules', 0777, true);
$projects = array();
$modules = module_list();
foreach ($modules as $module) {
  $info = system_get_info('module', $module);
  $projects[$info['project']] = $info['version'];
}

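// Core reports itself as project 'drupal', so it is downloaded on its own;
// everything else lands under /tmp/audit/modules.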
foreach ($projects as $project =&gt; $version) {
  if ($project &amp;&amp; $version) {
    echo sprintf('Downloading %s version %s', $project, $version) . PHP_EOL;
    if ($project === 'drupal') {
      system("drush dl --destination=/tmp/audit $project-$version");   
    }
    else {
      system("drush dl --destination=/tmp/audit/modules $project-$version");
    }
  }
}</code></pre><p>The diff command can then be used to show any discrepancies:</p><pre><code class="language-bash">$ diff -rq /tmp/audit/drupal-7.28/ /path/to/codebase
Files /tmp/audit/drupal-7.28/.htaccess and /path/to/codebase/.htaccess differ

$ diff -rq /tmp/audit/modules/ /path/to/codebase/path/to/modules
Only in /path/to/codebase/path/to/modules: old_ckeditor_version
Only in /path/to/codebase/path/to/modules: allmodules.zip
Only in /path/to/codebase/: files</code></pre><h2 id="files"><strong>Files</strong></h2><p>Where a recent backup of the files directory isn't available, running the following commands will do basic checking to ensure there aren't any PHP files in the directory.</p><pre><code class="language-bash">$ find sites/adammalone/files/ -iname "*.php"
$ find sites/adammalone/files/ -type f -exec file {} \; | grep -i PHP</code></pre><p>Other extensions and filetypes can be checked by altering the command, e.g. for Windows executables:</p><pre><code class="language-bash">$ find sites/adammalone/files/ -iname "*.exe"
$ find sites/adammalone/files/ -type f -exec file {} \; | grep -i executable</code></pre><p>Additionally, the .htaccess for files directories should include the following section to remove the attack surface in future by manually specifying the handler.</p><pre><code class="language-apacheconf">&lt;Files *&gt;
  # Override the handler again if we're run later in the evaluation list.
  SetHandler Drupal_Security_Do_Not_Remove_See_SA_2013_003
&lt;/Files&gt;</code></pre><p>As an additional step, an antivirus can be run over the files directory to ensure nothing exists there that looks suspicious. ClamAV was used in this instance and is simple to install, download the latest virus definitions and run. I found the following options to be favourable:</p><pre><code class="language-bash">clamscan -irz --follow-dir-symlinks=2 --follow-file-symlinks=2</code></pre><p>At this point, our work was complete and the site was restored.</p><p>In summary, running through these checks was a lengthy process that provided me with a far more complete understanding of the available attack surfaces once a Drupal site is compromised. While we were only dealing with small sites, a larger site would have required a lot more effort and produced far more tears.</p><p>While no one can predict when vulnerabilities will surface, the fallout (and tears) of these security vulnerabilities can be reduced with solid, regular backups, keeping code in VCS and ensuring servers are hardened. Additionally, in times of security crisis, another obvious alternative to implementing this yourself is to utilise the services of a managed hosting provider, which can provide you with support and timely assistance.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Mounting Acquia locally with sshfs ]]></title>
        <description><![CDATA[ One of the things that I&#39;ve been working on recently as part of my MBOs with
Acquia is related to learning and teaching Drupal 8. My latest self enforced
task is to port the SimpleSAMLphp Authentication
[https://www.drupal.org/project/simplesamlphp_auth] module and create a new ]]></description>
        <link>https://www.adammalone.net/mounting-acquia-locally-sshfs/</link>
        <guid isPermaLink="false">5f339cb821b8f9692ae93937</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 02 Nov 2014 09:40:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1567540227188-f27fb2e2babd?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
<content:encoded><![CDATA[ <p>One of the things that I've been working on recently as part of my MBOs with Acquia is related to learning and teaching Drupal 8. My latest self-enforced task is to port the <a href="https://www.drupal.org/project/simplesamlphp_auth?ref=adammalone.net">SimpleSAMLphp Authentication</a> module and create a new and shiny D8 version. After migrating most of the routing, configuration and form structures I started to work on the business logic and get federated login working.</p><p>I've been using the free Identity Provider (IdP) provided by <a href="https://openidp.feide.no/?ref=adammalone.net">Feide</a> to remove the pain of setting up my own IdP. The only caveat to this is that I needed to ensure my Service Provider (SP) is accessible from the general web, which is not possible when running Drupal 8 on my laptop. The quickest way for me to get a Drupal 8 ready platform that would be accessible online was to spin up a quick <a href="https://www.acquia.com/free?ref=adammalone.net">Acquia Freetier</a> site and work from there. Unfortunately I would then lose the use of PHPStorm and all the benefits a solid IDE brings to Drupal 8 development.</p><p><strong><strong>Enter sshfs</strong></strong></p><pre><code class="language-bash"># OSX
brew install sshfs

# RHEL
yum install fuse-sshfs

# Debian/Ubuntu
apt-get install sshfs</code></pre><p>Create a host in ~/.ssh/config to make life simple</p><pre><code class="language-bash">Host myfreetiersite
  Hostname free-xxxx.devcloud.hosting.acquia.com
  User &lt;acquia username&gt;
  Port 22
  IdentityFile ~/.ssh/id_rsa</code></pre><p>Create the mount point (I'm using /tmp although anywhere owned by the user is acceptable)</p><pre><code class="language-bash">mkdir /tmp/freetier</code></pre><p>Mount Acquia freetier with the correct path and flags (Assuming livedev has been enabled)</p><pre><code class="language-bash">sshfs -o reconnect,follow_symlinks,compression=yes,volname="Acquia Freetier" myfreetiersite:dev/livedev/docroot/ /tmp/freetier/</code></pre><p>Move on with your life when you're done!</p><pre><code class="language-bash">umount /tmp/freetier/</code></pre> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Playing with augeas for fun and profit ]]></title>
        <description><![CDATA[ Contrary to what Wikipedia says [https://en.wikipedia.org/wiki/Augeas], the
Augeas I&#39;m using isn&#39;t at all related to the 5th labour of Hercules. Rather,
it&#39;s a configuration editing tool [http://augeas.net/index.html] and Puppet
resource type [https://docs.puppetlabs.com/ ]]></description>
        <link>https://www.adammalone.net/playing-augeas-fun-and-profit/</link>
        <guid isPermaLink="false">5f339bb621b8f9692ae93901</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sat, 11 Oct 2014 08:00:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1509317745287-0b91c9275dd2?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
<content:encoded><![CDATA[ <p>Contrary to what <a href="https://en.wikipedia.org/wiki/Augeas?ref=adammalone.net">Wikipedia says</a>, the Augeas I'm using isn't at all related to the 5th labour of Hercules. Rather, it's a <a href="http://augeas.net/index.html?ref=adammalone.net">configuration editing tool</a> and <a href="https://docs.puppetlabs.com/guides/augeas.html?ref=adammalone.net">Puppet resource type</a> used primarily to alter and control config files.</p><p>After recently adding control of /root/.my.cnf to the manifests managing all my servers, I needed to look into something which could alter that configuration file without blowing away some of the other lines in there which weren't centrally managed - like the MySQL root password.</p><p>My /root/.my.cnf is structured as follows and I wanted to alter the port (from 3306 to 33306).</p><pre><code class="language-apacheconf">[client]
user=root
password=supersekretpassword
port=3306
socket=/var/run/mysqld/mysqld.sock</code></pre><p>Augeas allows us to alter config from this file and rewrite it without clobbering any of the other details. Whilst there's already plenty of information about using Augeas on the <a href="http://augeas.net/index.html?ref=adammalone.net">documentation site</a> and the <a href="https://docs.puppetlabs.com/references/latest/type.html?ref=adammalone.net#augeas">puppet type reference page</a>, there wasn't anything that immediately stood out to me when I had difficulty with the /root/.my.cnf file.</p><p>One of the key things to realise when dealing with Augeas is that it has preset configuration file types loaded into it. These preset config file styles are known as '<a href="http://augeas.net/docs/lenses.html?ref=adammalone.net">lenses</a>' and each will only work with a set list of files at specific file locations. By default, the lens holding the information for MySQL config files (MySQL.lns) only acknowledges the existence of the following file locations:</p><ul><li>/etc/my.cnf</li><li>/etc/mysql/conf.d/*.cnf</li><li>/etc/mysql/my.cnf</li></ul><p>Before we can make the augeas type alter the /root/.my.cnf file we have to register it with the MySQL.lns lens.</p><h3 id="understanding-augeas-with-augtool"><strong>Understanding Augeas with augtool</strong></h3><p>From the command line, we can use <a href="https://linux.die.net/man/1/augtool?ref=adammalone.net">augtool</a> to investigate what Augeas is able to see and the files registered to each lens type. Using augtool is a great way to start understanding Augeas, the config tree and the ability to alter config files. To register our aforementioned /root/.my.cnf and alter the port number using augtool we may use the following steps.</p><pre><code class="language-bash"># Load augtool without any files.
$ augtool --noload

# Add in the /root/.my.cnf file as a configuration
# file managed by the MySQL.lns lens and ensure
# we don't clobber any of the other include files by
# setting the include index as 1 greater than the last
# existing.
augtool&gt; set /augeas/load/MySQL/incl[last()+1] "/root/.my.cnf"

# Finally load the files.
augtool&gt; load

# Print the port directive from the [client]
# section in the /root/.my.cnf file
augtool&gt; print /files/root/.my.cnf/target[ . = "client"]/port
/files/root/.my.cnf/target/port = "3306"

# Set the port to 33306
augtool&gt; set /files/root/.my.cnf/target[ . = "client"]/port 33306
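
# Print again to confirm the change before saving.
augtool&gt; print /files/root/.my.cnf/target[ . = "client"]/port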

# Save the changes back to /root/.my.cnf
augtool&gt; save
Saved 1 file(s)</code></pre><h3 id="implementing-root-my-cnf-changes-with-augeas-in-puppet"><strong>Implementing /root/.my.cnf changes with Augeas in Puppet</strong></h3><p>Once you start to understand how Augeas uses lenses for configuration file types and the way a configuration file becomes a tree, it becomes trivial to alter any config file in puppet with the augeas resource. The following snippet will ensure that, as above, the port directive in the [client] section of the /root/.my.cnf file is set to 33306. Without specifying the correct lens however, this won't work as Augeas doesn't register /root/.my.cnf to a lens by default so ignores it.</p><pre><code class="language-puppet">augeas {'/root/.my.cnf port change':
  lens =&gt; 'MySQL.lns',
  incl =&gt; '/root/.my.cnf',
  changes =&gt; [
    "set target[ . = 'client']/port 33306"
  ],
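  # Optional: skip evaluating the change when the value is already correct.
  # onlyif =&gt; "get target[ . = 'client']/port != '33306'",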
}</code></pre> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Becoming Acquia Backend Specialist Certified ]]></title>
        <description><![CDATA[ For the most recent Acquia Professional Services All Hands in Boston I took the
Acquia Backend Specialist certification and passed! Following the helpful blog
articles about the general certification from Tanay Sai
[http://www.tanay.co.in/blog/cracking-acquia-drupal-certification.html] and 
Webchick [http://webchick.net/node/125], I felt it ]]></description>
        <link>https://www.adammalone.net/becoming-acquia-backend-specialist-certified/</link>
        <guid isPermaLink="false">5f339b6e21b8f9692ae938f0</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Thu, 02 Oct 2014 23:40:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1513151233558-d860c5398176?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
<content:encoded><![CDATA[ <p>For the most recent Acquia Professional Services <em><em>All Hands</em></em> in Boston I took the Acquia Backend Specialist certification and passed! Following the helpful blog articles about the general certification from <a href="http://www.tanay.co.in/blog/cracking-acquia-drupal-certification.html?ref=adammalone.net">Tanay Sai</a> and <a href="http://webchick.net/node/125?ref=adammalone.net">Webchick</a>, I felt it was only responsible to speak about my experience during the backend exam.</p><p>One of the key differentiators between those of us involved in Drupal development is whether we err more towards the backend or the frontend. There are of course those who excel in both but generally I've found people to specialise more in either module or theme development. Because I <em><em>suck</em></em> at theming (I mean have you seen my website?), I consider myself more of a backend specialist. For me, the backend begins with a bare server (with a Red Hat based OS installed) and ends when Drupal starts to create markup for a beautiful site. In this exam, however, backend doesn't delve further than the application layer.</p><p>The examination experience was a little different to the generic exam which I took at home. Instead of being invigilated by someone spying on me through my webcam, I was observed by the Acquia Learning Services team to ensure I didn't crack open a copy of 'Drupal 7 Module Development' when nobody was looking. I'd recommend this method of taking the test over the slightly more intrusive webcam watching. 
If you get a chance to attend a DrupalCon and you're offered the option of becoming certified there, do so!</p><h3 id="sections"><strong>Sections</strong></h3><p>Each of the sections being looked at for applicants is available on the <a href="https://www.acquia.com/customer-success/learning-services/acquia-certified-developer-back-end-specialist-exam-blueprint?ref=adammalone.net">blueprint page for the backend exam</a>. The key areas to look at for the test itself are:</p><ul><li>Drupal core API</li><li>Database abstraction layer</li><li>Debug code and troubleshooting</li><li>Leveraging community</li><li>Performance</li><li>Security</li><li>Theme integration</li><li>Fundamental web development</li></ul><p>Each of these sections carries a certain weighting which directly relates to the number of questions that'll be asked about the topics. At 30%, a sound knowledge of the Drupal core API is definitely something you'll need to pass the certification.</p><h3 id="preparation"><strong>Preparation</strong></h3><p>Based on the sections being examined and the percentages attributed to each section, I'd recommend studying up at least on the areas you identify yourself as weak on before trying. For me, I revised some of the intricacies of the Database API within Drupal. Whilst I often rely on api.drupal.org during development to get unstuck, this test prohibits access to external resources. Being able to commit a few things to memory quickly just before the test allowed me to accurately answer questions I may not have been able to otherwise.</p><p>I also read up on some of the changes that PHP <a href="https://php.net/manual/en/migration54.new-features.php?ref=adammalone.net">5.4</a> and <a href="https://php.net/manual/en/migration55.new-features.php?ref=adammalone.net">5.5</a> brought in over the now <a href="https://php.net/eol.php?ref=adammalone.net">EOL 5.3</a>. 
Namespacing, traits, shorthand array syntax ($array = [];) and general OOP practices came up for me, so do ensure you're aware of the new PHP hotness.</p><h3 id="tips"><strong>Tips</strong></h3><ul><li>Since the questions are either multiple choice or multiple response (select one or many), there's no penalty in taking a good guess at any you're unsure of;</li><li>Blast through the questions you're confident of and mark any questions you're unsure of so you can come back at the end with a fresh frame of mind;</li><li>At 90 minutes and 60 questions you should be spending around 90 seconds on each question. If you find yourself staying on a question for longer than that, mark it and move on; and</li><li>Settle in for the 90 minute stretch when you sit down. If that means getting water, food or taking a bio-break beforehand, then do that.</li></ul><h3 id="why-you-should-take-the-certification"><strong>Why you should take the certification</strong></h3><p>I don't want to repeat the already well written <a href="https://www.acquia.com/customer-success/learning-services/acquia-certification-program-overview?ref=adammalone.net">discussion about the merits of Drupal certification</a>; I will however try to quantify why <em><em>I</em></em> took the certification. Being in the Drupal community, around amazing developers and those learning alike, the most I have to go on about how experienced someone is comes either from the number of commits on their Drupal.org profile or from speaking with them or their peers.</p><p>This same experience must hold true for people who've just met me and want to know where I stand in the community. Whether it's clients, other community members or even people who listen to me speak at conferences, none have an accurate representation of me as a Drupalist. 
With a general Drupal certification and now a backend specialist certification there exists a qualitative measure of my knowledge.</p><h3 id="what-i-got"><strong>What I got</strong></h3><p>So I passed the test, but what did I get for each section?</p><figure class="kg-card kg-image-card"><img src="/sites/adammalone/files/styles/large/public/test_results.png?itok=7QDd2ynK" class="kg-image" alt></figure><p>I could certainly improve my scores in some of the sections and I should definitely touch up my understanding of how to leverage the community. That being said, my knowledge is improving all the time just by being a part of the Professional Services team and if you want to become a part of this amazing team then <a href="/contact-me">get in touch</a> because we're <strong><strong>always hiring</strong></strong>.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Run a Hackathon, inspire passion for Drupal ]]></title>
        <description><![CDATA[ With a beta release of Drupal 8 just around the corner, and a library of modules that need a stable port (or at least a beta release) on release day, now is the time to get your learning underway and dive in.

A brief look at this chart from the ]]></description>
        <link>https://www.adammalone.net/run-hackathon-inspire-passion-drupal/</link>
        <guid isPermaLink="false">5f339b1f21b8f9692ae938dd</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Mon, 14 Jul 2014 20:30:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1504384308090-c894fdcc538d?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>With a beta release of Drupal 8 just around the corner, and a library of modules that need a stable port (or at least a beta release) on release day, now is the time to get your learning underway and dive in.</p><p>A brief look at this chart from the Drupal 7 development cycle makes the effect of available contrib on core uptake really clear. If we as a community can ensure contrib is ready when Drupal 8 is released, then everyone has a greater incentive to jump in!</p><figure class="kg-card kg-image-card"><img src="https://www.adammalone.net/sites/adammalone/files/drupal_8_module_workshop.pdf_page_3_of_15_-2.png" class="kg-image" alt loading="lazy"></figure><p>One of my favourite ways of learning and rapidly spreading ideas to many is to run a Hackathon. The amazing thing about Hackathons in general is twofold:</p><ul><li>Meeting with like-minded and passionate people who share a genuine desire to further themselves and Drupal;</li><li>The sheer concentrated brainpower in one room, conducive to leaps in development.</li></ul><p>What I experienced at the CI&amp;T Hackathon in Campinas, Brazil was another great example of these.</p><p>Starting early on Thursday 26th June, the gathered attendees split off into six groups, each headed by a more experienced sensei. The aim of this Hackathon was to both increase the CI&amp;T developers' grasp of Drupal 8 and get some modules ported while we were at it.</p><figure class="kg-card kg-image-card"><img src="https://www.adammalone.net/sites/adammalone/files/styles/square_thumbnail/public/adam_hernani.png?itok=qmyGFqhQ" class="kg-image" alt loading="lazy"></figure><p>For the first hour, both myself and fellow Acquia technical consultant, <a href="https://www.drupal.org/user/448086?ref=adammalone.net">Hernâni Borges de Freitas</a>, presented an overview of Drupal 8. 
Not only are there new features to learn about, but also differences from Drupal 7 that are important to know about as module developers. With a collection of homegrown examples and code from the pants module, we were able to cover info.yml files, routing, configuration management, extending classes in Drupal 8 to leverage core, debugging tips and entities.</p><p>With this brief injection of Drupal 8, the attendees set about creating teams and deciding on their targeted contrib module to port. Each team was headed up by a senior CI&amp;T developer to act as sensei and guide the learning of the team. Guiding the senseis was the job of Hernani and me. Acting as floating resources, we were on hand to unblock issues and provide both information and inspiration!</p><p>After consulting with each team and advising on module choice, the teams selected the following to port from Drupal 7 to Drupal 8:</p><ul><li><a href="https://www.drupal.org/project/seckit?ref=adammalone.net">Security Kit</a></li><li><a href="https://www.drupal.org/project/languageicons?ref=adammalone.net">Language Icons</a></li><li><a href="https://www.drupal.org/project/commentrss?ref=adammalone.net">Comment RSS</a></li><li><a href="https://www.drupal.org/project/m4032404?ref=adammalone.net">403 to 404</a></li><li><a href="https://www.drupal.org/project/flood_control?ref=adammalone.net">Flood Control</a></li><li><a href="https://www.drupal.org/project/filter_protocols?ref=adammalone.net">Filter Allowed Protocols</a></li><li><a href="https://www.drupal.org/project/login_security?ref=adammalone.net">Login Security</a></li></ul><p>The choice of module was carefully considered and each selection adhered to a set of guidelines the Acquia team provided. Our end goals as Hackathon mentors were that all attendees helped port a module and all attendees learned without feeling overwhelmed. 
We recommended the teams choose modules that all members had used previously; the familiarity would save valuable development time by removing the need to work out how the module worked.</p><p>Additionally, while ambitiousness is by no means a bad quality, the teams only had around 12 hours to fully port their chosen modules. With this in mind, we recommended against larger projects and instead advised smaller, yet diverse, modules to give each team exposure to as many new Drupal 8 features as possible.</p><p>With the modules selected, each team started to discuss module architecture, identifying tasks and key features that would be involved in the port. Prior to the Hackathon, the CI&amp;T team had introduced us to a number of the different development methodologies they use day to day on projects. It was fascinating to watch each team continue to follow these methodologies throughout the sprint.</p><p>The main techniques I observed were Agile and Dojo. For those unfamiliar, the Dojo technique limits each team to one computer and 15 minute coding stints. Each team member spends their allotted time at the keyboard with the rest of the team either providing knowledge or absorbing information; this ensures potential distractions are reduced and allows for kinesthetic learning where each team member gets a go rather than be limited to just watching.</p><figure class="kg-card kg-image-card"><img src="https://www.adammalone.net/sites/adammalone/files/styles/large/public/hackathon_team_mentoring.png?itok=zdr9ZEIH" class="kg-image" alt loading="lazy"></figure><p>At the end of a day and a half of near constant hacking, the teams had managed to port seven modules to Drupal 8. To do so in such limited time is impressive by anyone’s standards and puts a good dent in the number of projects that needed the upgrade. After convening to discuss efforts, we were proud to announce teams Seckit and Login Security as joint winners. 
Not only had they teamed up on important contributed modules, but they’d done it in an inclusive manner where everyone learned. Overall though, we were really proud of the effort everyone attending put in.</p><p>As a Drupal 8 advocate, it’s my aim to spread the love of Drupal 8 to as many in the community as I can wherever in the world they are! In line with this, it’s my hope that by empowering the CI&amp;T Brazil team with Drupal 8 knowledge, they’ll be able to take on the roles of mentors themselves and spread that knowledge further.</p><p>In addition to the noise I make about the importance of getting an early edge on the latest and greatest Drupal, the <a href="https://groups.drupal.org/node/430243?ref=adammalone.net">D8CX initiative</a> headed up by Lee Rowlands (<a href="https://drupal.org/u/larowlan?ref=adammalone.net">larowlan</a>) is something to follow. A cousin of the D7CX initiative, D8CX aims to provide promotion and assistance to port contributed modules.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Why Pound is awesome in front of Varnish ]]></title>
        <description><![CDATA[ We all know Varnish is awesome. I went as far as presenting a topic on Varnish then writing about it. This is a known fact.

However, what happens to all that caching goodness when you want to run your entire site over SSL? Out of the box, Varnish doesn&#39; ]]></description>
        <link>https://www.adammalone.net/why-pound-awesome-front-varnish/</link>
        <guid isPermaLink="false">5f3399ea21b8f9692ae93899</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Wed, 21 May 2014 05:30:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1518365050014-70fe7232897f?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>We all know <a href="https://www.varnish-cache.org/?ref=adammalone.net">Varnish</a> is awesome. I went as far as <a href="https://www.adammalone.net/post/varnish-beginners">presenting a topic on Varnish</a> then <a href="https://www.adammalone.net/post/explaining-varnish-beginners">writing about it</a>. This is a known fact.</p><p>However, what happens to all that caching goodness when you want to run your entire site over SSL? Out of the box, Varnish doesn't support it. While I've heard some mention that not supporting SSL is an oversight, <a href="https://www.varnish-cache.org/docs/trunk/phk/ssl.html?ref=adammalone.net">there exists some very sound reasoning for why <em><em>not</em></em></a>.</p><p>So how do people terminate SSL?</p><ul><li>Nginx - <a href="https://docs.acquia.com/cloud-platform/arch/?ref=adammalone.net">How Acquia, my employer does it</a></li><li>Stunnel - <a href="https://www.stunnel.org/index.html?ref=adammalone.net">Software I'm fond of</a></li><li>Pound - My preferred method</li><li>Apache - <a href="https://httpd.apache.org/docs/2.2/mod/mod_ssl.html?ref=adammalone.net">mod_ssl</a></li></ul><h2 id="what-is-pound"><strong>What is Pound?</strong></h2><p>Without copying exactly what's on the <a href="http://www.apsis.ch/pound?ref=adammalone.net">Pound documentation</a>, or the <a href="https://en.wikipedia.org/wiki/Pound_(networking)?ref=adammalone.net">Wikipedia entry about Pound</a>, it's essentially a reverse proxy, SSL terminator and load balancer but <strong><strong>NOT</strong></strong> a webserver. It's small, easy enough to install and has minimal configuration. Stunnel is similarly simple, but since I have quite extensive experience using Stunnel, I decided to learn something new.</p><p>On my load balancing servers, Pound listens on port 443 and Varnish listens on port 80. 
When traffic comes in on port 443, it hits Pound, gets decrypted using my server certificate and then gets passed to Varnish on port 80. By putting <strong><strong>all</strong></strong> traffic through Varnish, I'm able to take advantage of its caching ability for both HTTP and HTTPS traffic.</p><p>It's <em><em>almost</em></em> that simple. I had to make some minor changes to my <a href="https://www.varnish-cache.org/docs/3.0/reference/vcl.html?ref=adammalone.net">VCL</a> to receive and cache mixed mode traffic. Prior to these changes, I would sometimes deliver resources using the HTTP scheme to pages delivered over HTTPS. This had the understandable effect of causing my browser to complain about insecure resources.</p><h2 id="getting-varnish-and-pound-to-play-nicely"><strong>Getting Varnish and Pound to play nicely</strong></h2><p>Realising that we need to handle HTTP/HTTPS traffic differently in Varnish, even though it all comes in on port 80, I decided to use a separate cache hash key for each. Varnish uses hashes of the URI as a key to store and look up data by. My VCL implements the <a href="https://www.varnish-software.com/static/book/VCL_functions.html?ref=adammalone.net#vcl-vcl-hash">vcl_hash</a> subroutine to detect HTTPS traffic and alter the hash key. We add a header in Pound to tell Varnish that the traffic came in over SSL and then watch the magic happen.</p><p><strong><strong>pound.cfg</strong></strong></p><pre><code class="language-apacheconf">ListenHTTPS
  Address 0.0.0.0
  Port 443
  HeadRemove "X-Forwarded-Proto"
  AddHeader "X-Forwarded-Proto: https"
  Cert "/etc/ssl/certs/adammalone.net.pem"
End

Service
  HeadRequire "Host:.*adammalone.net.*"
  Backend
    Address 127.0.0.1
    Port 80
  End
End</code></pre><p><strong>default.vcl</strong></p><pre><code class="language-apacheconf">sub vcl_hash {
  hash_data(req.url);
  if (req.http.host) {
    hash_data(req.http.host);
  } else {
    hash_data(server.ip);
  }
  # Use special internal SSL hash for https content
  # X-Forwarded-Proto is set to https by Pound
  if (req.http.X-Forwarded-Proto ~ "https") {
    hash_data(req.http.X-Forwarded-Proto);
  }
  return (hash);
}</code></pre><p>The <a href="https://www.varnish-cache.org/docs/trunk/reference/vcl.html?ref=adammalone.net#functions">hash_data</a> function allows us to add further information to the hash. By adding 'https' to the host and uri information, we're altering the hash in such a way that it is different from just the host + uri that an http request would use.</p><p>I've also attached a downloadable copy of my full Pound config and the puppet manifest that generates it for people who are interested in replicating this functionality. I'm using my Pound puppet class located at <a href="https://github.com/typhonius/puppet-pound?ref=adammalone.net">typhonius/puppet-pound</a>, a fork of <a href="https://github.com/mrintegrity/puppet-pound?ref=adammalone.net">mrintegrity/puppet-pound</a>.</p><h2 id="drupal-configuration"><strong>Drupal configuration</strong></h2><p>The final thing to do is to inform Drupal it needs to be in SSL mixed mode and to enter a small snippet in my settings.php so it can be turned on or off based on the incoming request. If Varnish is running on the same server as your Drupal installation, you'll need to replace www.xxx.yyy.zzz with 127.0.0.1. Otherwise it'll be the IP of your load balancing server.</p><pre><code class="language-php">// Varnish Settings
$conf['reverse_proxy'] = TRUE;
$conf['reverse_proxy_addresses'] = array('www.xxx.yyy.zzz');
$conf['reverse_proxy_header'] = 'HTTP_X_FORWARDED_FOR';
$conf['page_cache_invoke_hooks'] = FALSE;
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) &amp;&amp; $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') {
  $_SERVER['HTTPS'] = 'on';
}</code></pre><p>This is how I allow SSL through Varnish; if you do it differently, add a comment!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ DrupalGov Canberra 2014 ]]></title>
        <description><![CDATA[ To those who know me, the past couple of weeks in my schedule have been fairly
jam packed with DrupalGov items. Like an aperture window, each part of the
pre-planning stage relies on its neighbour before carefully interlocking in
place. DrupalGov Canberra, Asia Pacific and Australia’s only Government centric ]]></description>
        <link>https://www.adammalone.net/drupalgov-canberra-2014/</link>
        <guid isPermaLink="false">5f33999721b8f9692ae93886</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Mon, 12 May 2014 14:05:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1592133384951-d0316dd663c1?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>To those who know me, the past couple of weeks in my schedule have been fairly jam packed with DrupalGov items. Like an aperture window, each part of the pre-planning stage relies on its neighbour before carefully interlocking in place. DrupalGov Canberra, Asia Pacific and Australia’s only Government centric Drupal event, is coming for another year.</p><h3 id="about-the-conference"><strong>About the conference</strong></h3><p>Right now, the most up to date conference details are available on the <a href="http://www.drupalact.org.au/events/drupalgov-canberra-2014?ref=adammalone.net">DrupalGov conference pages</a> of the <a href="http://www.drupalact.org.au/?ref=adammalone.net">DrupalACT website</a>. The show begins from <strong><strong>9am</strong></strong> on the <strong><strong>22nd August</strong></strong> at the <strong><strong>National Museum of Australia</strong></strong> with sessions running throughout the day and a strong likelihood of further events in the evening.</p><p>We'll be running <a href="http://www.drupalact.org.au/submit-session?ref=adammalone.net">three session tracks</a> with the hope of appealing to all registrants at the event.</p><p>For those into hardcore coding, theming and DevOps, we'll have a track available to discuss the latest Drupal techniques, continuous integration with Drupal and third party technology interaction.</p><p>If you run a department or a development team in Government, then I'm hoping the Gov track will provide insights into keeping an agile workflow, managing a multitude of shareholders, starting to use open source or advice on advocating for the cloud from a decision maker standpoint.</p><p>Alternatively, for those wishing to improve the architecture of their existing sites, future planned builds or just looking to pick up on the techniques of other site builders and architects we're running the Case Study track. 
Covering entire builds, content workflow and site management, this track should provide enough information for all seeking it.</p><p>With these tracks and the ability to pick and choose the sessions attended throughout the day, it's my strong hope that we're going to cover a huge amount of content applicable to Drupal in Government.</p><h3 id="why-the-conference"><strong>Why the conference?</strong></h3><p>Even with DrupalGov Canberra 2014 as a sequel to the successful <a href="http://www.drupalact.org.au/events/drupalgov-canberra-2013?ref=adammalone.net">DrupalGov Canberra 2013</a>, it's likely that an event would be necessary anyway. Only late last week did I hear news of John Sheridan's trailblazing <a href="http://www.finance.gov.au/blog/2014/05/07/seeking-industry-comment-on-govcms-draft-statement-of-requirements/?ref=adammalone.net">blog post on the departmental blog</a>. This sparked off further comments on both <a href="http://delimiter.com.au/2014/05/08/govt-shift-450-sites-drupal-cloud/?ref=adammalone.net">The Delimiter</a> and <a href="http://www.computerworld.com.au/article/544726/australian_government_likely_standardise_drupal/?ref=adammalone.net">Computer World</a> that provide additional commentary and analysis of the decision.</p><p>Long story short, Drupal and the cloud are held in high regard as an open source solution to Government CMS needs.</p><p>This is exciting for me for three main reasons:</p><ul><li>I'm a Drupal advocate/developer with a passion for open source</li><li>I work for a <a href="https://www.acquia.com/?ref=adammalone.net">managed cloud hosting company</a></li><li>I currently live in Canberra</li></ul><p>These three reasons combined mean that although I am not directly affected by a decision to migrate sites to a <em><em>GovCMS</em></em>, many around me will be. 
Ensuring we have both a local Drupal user group and a best in class event to champion the use of Drupal within Government will provide those transitioning with more support as they get up and running.</p><p>Quite often around new technologies there exists a certain degree of <a href="https://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt?ref=adammalone.net">fear</a>; there's fear of change, the unknown and of obsolescence. While Drupal has been around for <a href="https://drupal.org/node/769?ref=adammalone.net">well over 10 years</a>, it's still relatively new within Government circles, traditionally the bastion of proprietary and closed source. The community driving the development, use, training and support of Drupal is vibrant and active. Looking at an <a href="http://www.drupical.com/?ref=adammalone.net">aggregate calendar</a> shows 80 unique events spread over 6 continents globally with Drupal as the topic of discussion, including <a href="https://www.meetup.com/DrupalACT/events/180405802/?ref=adammalone.net">one such event in Canberra</a>. The community exists, in part, to reduce the barrier to entry and inform in order to reduce fear.</p><p>One of the mottos with Drupal is:</p><blockquote>Come for the code, stay for the community.</blockquote><p>This is something I've found true, at least for me. The code was great, drew me in and allowed me to write code of my own. But the real thing that prompted me to stay, pushed me to develop more and allowed my passion for Drupal and open source to thrive was the community and the sheer force of effort that I observe others put into it.</p><p>So, in summary: I want DrupalGov Canberra 2014 to inform those unfamiliar with Drupal, demonstrate the community driving Drupal and give Government and private sector alike that same push I felt when I started.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ PRD suck ]]></title>
        <description><![CDATA[ For anyone who is in the process of renting a property in Canberra and are
looking at properties advertised by PRDnationwide
[http://www.prdcanberracentral.com.au/] I&#39;d advise a little caution prior to
signing the contract and finalising the deal.

Moving In
In around November 2012 I started ]]></description>
        <link>https://www.adammalone.net/prd-suck/</link>
        <guid isPermaLink="false">5f33992e21b8f9692ae93873</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Fri, 02 May 2014 10:10:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1464082354059-27db6ce50048?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
<content:encoded><![CDATA[ <p>For anyone who is in the process of renting a property in Canberra and is looking at properties advertised by <a href="http://www.prdcanberracentral.com.au/?ref=adammalone.net" rel="nofollow">PRDnationwide</a>, I'd advise a little caution prior to signing the contract and finalising the deal.</p><h3 id="moving-in"><strong>Moving In</strong></h3><p>In around November 2012 I started looking for a new place to rent so naturally turned to the relevant sources to find an appropriate place to live. On the face of it, all the property management companies look the same. Young, smartly dressed, bright eyed individuals filling our heads with imagery of wondrous homes that would be a joy to live in. With each viewing and each separate company the property managers followed the same format:</p><ul><li><strong><strong>Any</strong></strong> problems would be fixed prior to moving in (chips/cracks/stains);</li><li>There are numerous other people in queue so I had better move fast;</li><li>The agent has had nothing but good reports from everyone else in the area.</li></ul><p>Combine that with the feelings of inadequacy bestowed upon us by the property manager whilst talking to us and you have yourself the recipe for a property manager! 
In my head I'd tell myself that they were no different to me but for some reason <em><em>most</em></em> of PRD's property managers of the time appeared to think themselves on a higher echelon than the average human being.</p><p>Whilst moving in, my housemates and I learned a little more about the history of the property, including but not limited to:</p><ul><li>Previous tenants (Canberra Raiders) trashing the place;</li><li>Other tenants skipping rent and fleeing the ACT;</li><li>This was meant to be an investment property followed by a retirement home for the landlord;</li><li>The landlord wanted no groups - something I was able to get past with a little talking</li></ul><p>Since we were conscientious tenants, all the paperwork was filled in and supplied to the property management company: PRD.</p><h3 id="living-there"><strong>Living There</strong></h3><p>For the most part, I had an enjoyable tenancy although a few things struck me as odd about the way PRD operated for us. It was my understanding that a property manager was the point of contact between the tenant and the agent. Alas, we experienced five different managers over the tenancy. Couple this with a lack of communication between them and that's five times things had to be re-explained. A number of repairs and general tasks promised upon entering the tenancy were not completed after the year mark and repairs that were completed were done by the landlord after direct communication with us.</p><p>A key to the mailbox was not provided for a number of weeks and the only remedy was a phone call to the PRD Canberra CEO's mobile.</p><h3 id="moving-out"><strong>Moving Out</strong></h3><p>The move out procedure was fairly organised from our end. The place was cleaned, carpets steamed and inspected. 
PRD, on the other hand, didn't provide us with much help or information without heavy prompting and I'm not sure they ever knew the exact dates of departure.</p><h3 id="acat"><strong>ACAT</strong></h3><p>After being verbally informed we would receive 100% of the bond back after our final inspection, we considered that the end of an unpleasant chapter in renting history. However, as I came to realise, it's really important to get written confirmation, as we were then sent an invoice for $2590 (coincidentally $10 short of the full bond amount). This was not acceptable to my co-tenants and me, which caused us to take the matter to the <a href="http://www.acat.act.gov.au/?ref=adammalone.net">ACT Civil and Administrative Tribunal</a> to get it resolved. I would strongly advise all tenants in similar positions to consider this a highly viable option. It turns out that rental law in this country, and in the ACT in particular, is extremely favourable towards tenants, provided there is reasonable evidence to back up claims.</p><p>After providing a number of emails and documents curated during the course of the lease, the majority of the issues were turned over to PRD to deal with. Whilst I refuse to be walked over by unfair and underhand practices and I feel my co-tenants were of a similar temperament, it worried us that others could be taken advantage of in a similar manner. In short, if you're a tenant and it looks like you're stumbling into a similar situation to the one I've described, it's <a href="http://www.tenantsact.org.au/contactUs/Tenants-Advice-Service?ref=adammalone.net">free to seek legal advice</a>.</p><p>If you're an incredibly proactive person who doesn't mind repeated follow-up phone calls to check things are being done then you'll do fine. If unreplied emails are your thing, then rent with PRD. If you like long walks to Kingston in order to speak with them in person then you've got the right company. If, 
on the other hand, you like efficiency and not being taken for a ride; PRD is not the right company for you.</p><p>Similarly, prospective landlords tempted by PRD as a management company should also think twice before signing up. My understanding of the matter is that PRD is as lacklustre an agent for landlords as it is for tenants.</p><p>So, in conclusion, PRD suck and I'll never use their services again. I look forward to either comments refuting my claims or a concerted effort from the company to not suck in future.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Tracing errors in Drupal ]]></title>
        <description><![CDATA[ I get to see a lot of different Drupal issues in my day to day at Acquia
[https://www.acquia.com/]. From relatively simple problems that can be fixed
with altered configuration, a cache clear or a little custom code, to almost
untraceable bugs with roots deep in the Drupal ]]></description>
        <link>https://www.adammalone.net/tracing-errors-drupal/</link>
        <guid isPermaLink="false">5f3398c821b8f9692ae9385f</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Fri, 25 Apr 2014 09:25:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1503435980610-a51f3ddfee50?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p>I get to see a lot of different Drupal issues in my day to day at <a href="https://www.acquia.com/?ref=adammalone.net">Acquia</a>. From relatively simple problems that can be fixed with altered configuration, a cache clear or a little custom code, to almost untraceable bugs with roots deep in the Drupal bootstrap, where those debugging must tread with care. Usually, we are the resource consulted after the customer is at their wit's end and needs us to shed light on what seems like an impossible situation.</p><p>In early February, and with the clock ticking on an impending launch, one such customer got in touch with a bug that had stumped everyone on their team. One of the first steps we take when trying to get to the bottom of a bug for a customer is to attempt to replicate the issue. This allows us to start narrowing down and focusing in on the cause in order to head towards the process of finding a fix.</p><p>The problem with this particular issue was that replication was both unreliable and fluxional. On <em>occasion</em>, pages would time out, return partial content and tokens would not be converted correctly. The best clues we had to go on were that these incidents occurred on cache clears and cron runs… <em>sometimes</em>. Attempts to replicate the elusive issues in the customer’s development and staging environments failed with no errors observed. Additionally, replication of the problems was similarly temperamental on a local clone, no matter how many clones were attempted. 
The overarching symptom, however, appeared to be related to tokens, with the page title remaining the unparsed '[site:name] | [node:title]'.</p><p>In addition to the problems above, we were hindered by two ancillary issues that got in the way of debugging:</p><ul><li>Pending database updates, each of which could have either caused or exacerbated the problems</li><li>Huge numbers of notices in the log and appearing on execution of actions with <a href="https://github.com/drush-ops/drush?ref=adammalone.net">drush</a>. This made it hard to determine an obvious error (remember this for later).</li></ul><h3 id="digging-deeper"><strong>Digging deeper</strong></h3><p>At this point in the debugging cycle, we wanted an environment that matched production 1:1. Where production utilised dedicated load balancers, many webservers and highly available database servers, the staging and development environments did not. A replica environment was created in order to match exactly and provide the best route towards a successful replication. With more resources pulled in to assist in the debugging effort, and the urgency increasing due to the countdown to launch, we were able to echo what was occurring on production to about the same level of reliability; on cache clears and cron runs… <em>sometimes</em>.</p><p>The combination of an exact replica environment, with more eyes on the problem, told us that with both cache flush operations and cron runs, certain caches were purged. The idea, then, was that incorrect values were getting stored in the cache, after only a partially complete cache set. It was then decided to completely disable the cache on the replica environment. This would remove a variable we couldn't control, as well as follow our instincts to determine why cache wasn’t being set correctly. 
This was achieved with the following placed in the site's settings.php as directed in the <a href="https://drupal.org/node/797346?ref=adammalone.net">caching documentation</a>.</p><pre><code class="language-php">$conf['cache_backends'][] = './includes/cache-install.inc';
$conf['cache_default_class'] = 'DrupalFakeCache';</code></pre><p>This key step allowed us to reliably replicate all issues on every page load, a gold-mine in debugging terms, which brought us a lot closer to being able to get to the bottom of the issue. The decision was made to run through an entire bootstrap with <a href="https://xdebug.org/?ref=adammalone.net">XDebug</a> to analyse the page load and variables. This allowed us to observe all variables at every stage of the Drupal bootstrap; a task which paid off. Next, we started looking from the tokens module to core and the operations that happened early on in the bootstrap. Any time I'm debugging in module.inc or Entity API, I keep getting that nagging feeling that I've gone way too deep, then dive in to go deeper. Very much inception tactics of debugging.</p><figure class="kg-card kg-image-card"><img src="/sites/adammalone/files/styles/large/public/axrjd2g.jpg?itok=vMSiodms" class="kg-image" alt></figure><p>Some of the variable enumeration that XDebug aided with showed me that when modules were initially loaded, only half got loaded successfully. One of the unloaded victims was <a href="https://api.drupal.org/api/drupal/modules!system!system.module/7?ref=adammalone.net">system.module</a> which contains token hooks. 
These hooks were not being placed in the cache so [page:title] wasn't getting replaced!</p><h3 id="the-cause"><strong>The Cause</strong></h3><p>When Drupal first loads all its modules in <a href="https://api.drupal.org/api/drupal/includes!bootstrap.inc/function/calls/_drupal_bootstrap_variables/7?ref=adammalone.net">_drupal_bootstrap_variables()</a> via <a href="https://api.drupal.org/api/drupal/includes!module.inc/function/module_load_all/7?ref=adammalone.net">module_load_all()</a>, it runs through the list of modules and includes them for later execution. Drupal takes the functions and hooks from each module and stores them in the cache so hooks may be invoked at will later on without having to scan all the files again.</p><p>What if there’s some kind of error in the include of a module when Drupal is running on a freshly cleared cache_bootstrap? Rather than completing the bootstrap, Drupal will divert to its error functions and stop execution of the initial bootstrap. Since Drupal can only get part-way into its bootstrap, it can only cache some of the hooks. This caused a number of modules to not be loaded correctly and, as such, their hooks weren’t available later on in the Drupal page load when they were required. Remember how tokens weren’t being replaced in the page title? That was due to some implementations of <a href="https://api.drupal.org/api/drupal/modules%21system%21system.api.php/function/hook_token_info/7?ref=adammalone.net">hook_token_info</a> not being discovered and cached.</p><p>On subsequent page loads, since there are already cached items (albeit incomplete items), Drupal will adhere to what it has in its cache. The problem that caused all of this was a simple error in a custom module, although it was made far worse by the large number of notices flooding the logs. These hid the one notice that was causing this chain of errors. 
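</p><p>The offending pattern, and a guarded alternative, can be sketched roughly like this (the module and file names here are hypothetical, not the customer's actual code):</p><pre><code class="language-php">// Hypothetical sketch of the kind of unguarded include that can cut a
// bootstrap short when the file is missing:
// include_once dirname(__FILE__) . '/includes/helper_functions.inc';

// A guarded include lets module_load_all() finish and cache every hook:
$path = dirname(__FILE__) . '/includes/helper_functions.inc';
if (file_exists($path)) {
  include_once $path;
}
else {
  // Log and carry on rather than tripping the error handler mid-bootstrap.
  watchdog('mymodule', 'Optional include missing: @path', array('@path' => $path), WATCHDOG_WARNING);
}</code></pre><p>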
A single line including a file that didn’t exist, with no additional error handling, was the root cause of all the problems, and commenting out the line fixed absolutely everything.</p><h3 id="what-we-can-learn-from-this"><strong>What can we learn from this?</strong></h3><p>This just goes to show how a small error can be made far more problematic when the actual issue isn't easily observed amongst huge numbers of other errors. Aptly, we couldn't see the wood for the trees; there were too many notices to tell which was pertinent. Another issue was that the file include was not wrapped in any sort of check to determine whether the file existed. Nor was it preceded by an '@', which would have suppressed the error and allowed execution to continue. Additionally, the red herring of ~30 database updates that <em>could not</em> be completed served as a distraction from the ultimate goal of finding this issue. A clean codebase with up-to-date modules and themes spawning no errors would have had this resolved in a fraction of the time.</p><p>Defensive coding here would have helped, but I'm just glad I got to work on another crazy Drupal issue!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ DrupalSouth Wellington 2014 ]]></title>
        <description><![CDATA[ DrupalSouth [https://drupalsouth2014.drupal.org.nz/], over the weekend of the
14th-16th February 2014 was another occurrence of the annual series of larger
scale antipodean Drupal meet ups. Decision makers, developers, sysadmins project
managers and more from all over Australia, New Zealand and beyond came to
Wellington in the North ]]></description>
        <link>https://www.adammalone.net/drupalsouth-wellington-2014/</link>
        <guid isPermaLink="false">5f33986621b8f9692ae9384b</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 23 Mar 2014 21:20:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1558482623-d1507c001b57?ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;fm&#x3D;jpg&amp;crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;w&#x3D;2000&amp;fit&#x3D;max&amp;ixid&#x3D;eyJhcHBfaWQiOjExNzczfQ" medium="image"/>
        <content:encoded><![CDATA[ <p><a href="https://drupalsouth2014.drupal.org.nz/?ref=adammalone.net">DrupalSouth</a>, over the weekend of the 14th-16th February 2014, was another occurrence of the annual series of larger scale antipodean Drupal meet-ups. Decision makers, developers, sysadmins, project managers and more from all over Australia, New Zealand and beyond came to Wellington in the North Island to learn about and discuss the open source project that’s both employed us and brought us together: <a href="https://drupal.org/home?ref=adammalone.net">Drupal</a>.</p><p>I was lucky enough to be selected to <a href="https://drupalsouth2014.drupal.org.nz/sessions/business-strategy/how-not-fail-launch?ref=adammalone.net">present a session at the conference</a> after submitting a topic that echoes what I deal with frequently. A lot of our customers launch new sites on our platform at <a href="https://www.acquia.com/?ref=adammalone.net">Acquia</a>, be that a simple DNS cutover from another provider or a whole newly developed site. Usually, we’ve assisted in preparing the site for launch well before the date of no return and the launch is successful and quiet. However, once in a blue moon, if launch is pulled forward dramatically, a timeline isn’t followed, code changes are pushed at the last minute, or just an unforeseen scenario occurs, we get called in.</p><p>One of the hats I juggle as part of the customer solutions team is the task of necromancy; bringing websites and servers back from beyond the grave. When sites tank, colleagues from almost ten different time zones and I are available to jump in and get them rolling again. With site launches being a highly watched, increased pressure period in the lifetime of the site, it needs to be done right the first time or it could spell the end of a project entirely.</p><p>The session I presented offered a basic rundown of sensible site launch tasks and timelines. 
It also acted as a showcase for some of the more interesting launches I've seen, with examples of launch <em>almost</em>-failures and steps for mitigation. While the <a href="/sites/adammalone/files/how-to-not-fail-on-launch.pdf">slides</a> themselves may not be very revealing of the talk, I certainly plan to build on this and use it for future Drupal gatherings. Site launches are the sharp end of developing a site and everyone across the team has a role to play in making them a success. While web technologies and offerings come and go, I'd imagine launches are fairly consistent.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Acquia and Drupal in APAC ]]></title>
        <description><![CDATA[ The original timestamp on this post was back in April, shortly after I started with Acquia. However, as has been the case with a number of my commitments, I&#39;m only getting round to writing this now. An entirely non-technical article, I decided I&#39;d like to get ]]></description>
        <link>https://www.adammalone.net/acquia-and-drupal-apac/</link>
        <guid isPermaLink="false">5f33981b21b8f9692ae93838</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Mon, 09 Dec 2013 21:00:00 +0000</pubDate>
        <media:content url="https://www.adammalone.net/content/images/2020/08/acquia-logo.png" medium="image"/>
        <content:encoded><![CDATA[ <p>The original timestamp on this post was back in April, shortly after I started with <a href="https://acquia.com/?ref=adammalone.net">Acquia</a>. However, as has been the case with a number of my commitments, I'm only getting round to writing this now. An entirely non-technical article, I decided I'd like to get some thoughts down on 'paper' about my transition to the role at Acquia, what the job entails and where I think the company will go within Asia Pacific in the future.</p><p>Clearly, the obvious disclaimer of speaking as an individual, not on behalf of the company, applies here, although it's probably a good idea for me to make that explicit.</p><h3 id="the-how"><strong>The how</strong></h3><p>During <a href="https://sydney2013.drupal.org/?ref=adammalone.net">DrupalCon Sydney</a>, I met a bunch of Acquians. A sea of blue Drupal drop shirts, easily distinguishable when observing the crowd. In turn, I was approached and asked if I'd like to check how good the website I was working on was, using <em><a href="https://insight.acquia.com/instant-insight?ref=adammalone.net">instant insights</a></em>. Apprehensively, I offered the address of the <a href="https://ama.com.au/?ref=adammalone.net">Federal AMA</a>, the website we'd just finished migrating from Drupal 6, and waited with bated breath.</p><figure class="kg-card kg-image-card"><img src="https://www.adammalone.net/sites/adammalone/files/insight_aplus.png" class="kg-image" alt loading="lazy"></figure><p>Luckily, the result came back as an A+ and suddenly my efforts of the previous months were not in vain. Additionally, the privilege of an A+ means a free T-shirt, and who turns down free shirts!</p><p>A little while after the conference, contact was made and the rest is, not history, but the recent past.</p><h3 id="on-joining"><strong>On joining</strong></h3><p>Joining the 'club' was a little daunting if truth be told, although it needn't have been. 
I was flown to <a href="https://goo.gl/GosRon?ref=adammalone.net">HQ in Boston</a>, onboarded with the company and made to feel very welcome by my new colleagues. I might have overdressed a little for my <em>first day</em>; opting for the dress pants and shirt combo that adorned me at my previous place of employ. If anyone asks though, I'll maintain it was so I could <a href="https://www.youtube.com/watch?v=rKSvO9ncy3E&ref=adammalone.net">style on everyone</a>.</p><p>The whirlwind tour of the office was solidified by being introduced to all the members of my team, sitting with them while they worked, taking part in team meetings and generally starting to feel out my place and where I belonged.</p><h3 id="on-working"><strong>On working</strong></h3><p>The role I'm in provides me with the opportunity to get hands on with hundreds of different codebases and attempt to find solutions to some of the most interesting Drupal problems/implementations. As well as this, I'm able to present myself in an advisory capacity on dedicated channels where specific expertise is required for either planning, implementing or troubleshooting a business/code arrangement.</p><p>The job really flies when emergency situations land. Perhaps a site has just launched and it tanks due to higher than expected load, maybe a coding <a href="https://www.urbandictionary.com/define.php?term=snafu&ref=adammalone.net">snafu</a> has brought a site to its knees, or it could just be malicious attacks. Any of these situations, with a myriad of contributing factors, can occur, and it's a part of my job to fend off attackers, hotfix problem code and ensure Drupal is performing as well as it can. Some issues will have a number of team members collaborating for hours, sometimes days, to keep the sites up and we do it well, which is always a plus!</p><p>As well as deepening my knowledge of the ever moving drop, a lot of Acquia is about the infrastructure. 
Extending my comprehension of <a href="https://www.varnish-cache.org/?ref=adammalone.net">Varnish</a>, <a href="https://memcached.org/?ref=adammalone.net">memcache</a>, server sizing, <a href="https://wiki.centos.org/HowTos/Network/IPTables?ref=adammalone.net">iptables</a>, <a href="http://puppetlabs.com/?ref=adammalone.net">puppet</a>, <a href="https://php.net/manual/en/install.fpm.php?ref=adammalone.net">FPM</a>, CGI and much more is a direct result of being around all of them every day.</p><p>If this is something you think you could see yourself doing, we're always hiring so do <a href="https://www.adammalone.net/contact-me">get in contact with me</a>.</p><h3 id="on-the-future"><strong>On the future</strong></h3><p>Since Acquia is indelibly tied to Drupal, the growth of Drupal within the <a href="https://en.wikipedia.org/wiki/Asia-Pacific?ref=adammalone.net">Asia Pacific</a> region directly affects me. I've made some of my own assumptions about the future of Drupal within APAC. They may turn out to be entirely wrong, but it's good to have something on 'paper' that I can refer to in the future if they eventuate.</p><p><strong>Australia</strong></p><figure class="kg-card kg-image-card"><img src="https://www.adammalone.net/sites/adammalone/files/styles/large/public/apac-au.png?itok=chOl8zm_" class="kg-image" alt loading="lazy"></figure><p>Compared to the US and Europe, Australia has a <em>maturing</em> Drupal ecosystem. 
The <a href="https://groups.drupal.org/government-sites?ref=adammalone.net#Australia">government has started using Drupal</a> and <a href="https://en.wikipedia.org/wiki/Open_source?ref=adammalone.net">open source</a> has the <a href="http://drupalact.org.au/events/drupalgov-canberra-2013/conference/keynotes?ref=adammalone.net">backing</a> of the <a href="http://delimiter.com.au/2013/04/11/pia-waugh-takes-control-of-data-gov-au/?ref=adammalone.net">right</a> <a href="https://egovau.blogspot.com.au/2013/11/whos-open-sourcing-in-australian.html?ref=adammalone.net">people</a>. We have a bunch of really top rated Drupal shops evangelising the usefulness of Drupal to clients and a number of well populated Drupal user groups in the major cities.</p><p>Whilst Australia hasn't broken out completely in the same way that Europe and the US are continuing to do, it's on the edge, which makes it exciting to be here with <a href="http://www.theage.com.au/it-pro/government-it/opensource-platform-gains-popularity-in-government-20130924-hv1sj.html?ref=adammalone.net">Drupal doing so well</a>.</p><p><strong>Singapore</strong></p><figure class="kg-card kg-image-card"><img src="https://www.adammalone.net/sites/adammalone/files/styles/large/public/apac-sg.png?itok=iBxZ_hKA" class="kg-image" alt loading="lazy"></figure><p>With a number of <a href="http://www.olindata.com/?ref=adammalone.net">companies</a> and <a href="http://www.straitstimes.com/?ref=adammalone.net">organisations</a> on the island city converting to Drupal and a <a href="http://www.drupalcamp.sg/?ref=adammalone.net">DrupalCamp this year</a>, it's my opinion that Singapore will be one of the next key areas of Drupal usage after Australia. A relatively small country, with a number of the world's largest businesses holding major offices in a confined space, it has a density of technology that lends itself to the rapid sharing of ideas and solutions. 
I enjoy working with Singaporean clients, all of whom are eager to lap up all the Drupal knowledge I can provide, so this bodes well for the future of Drupal there!</p><p><strong>China</strong></p><figure class="kg-card kg-image-card"><img src="https://www.adammalone.net/sites/adammalone/files/styles/large/public/apac-cn.png?itok=cE6PJjzu" class="kg-image" alt loading="lazy"></figure><p>Whilst China has traditionally been slow to react to recent developments in tech, all that's changing. Again, like Singapore, the <a href="https://groups.drupal.org/node/10897?ref=adammalone.net">DrupalCamp and user group</a> scene has made me believe we'll be seeing some cool things out of the Middle Kingdom. A close friend of mine, and Drupalist, who himself hails from China receives frequent offers for work in start-ups in both Shanghai and Beijing. A sign that, like Australia, the jobs are there, but people with the required expertise are in short supply.</p><p><strong>Japan</strong></p><figure class="kg-card kg-image-card"><img src="https://www.adammalone.net/sites/adammalone/files/styles/large/public/apac-jp.png?itok=vQwxSt70" class="kg-image" alt loading="lazy"></figure><p>Similar to the above countries, Japan is developing a great Drupal culture, with a number of good services and <a href="http://www.jaypan.com/?ref=adammalone.net">well known names</a> from the community working from there. The beginnings of a healthy community, backed by user groups and events, will bring Drupal to the fore, allowing more people to become aware of the software and empowering companies to take the plunge into open source, confident that there is support both in the community and the wider world ready to assist.</p><h3 id="anticipation"><strong>Anticipation</strong></h3><p>Overall, I'm very excited to be in the APAC sector of the world within this industry at this time. 
Like <a href="https://en.wikipedia.org/wiki/Adam_Smith?ref=adammalone.net">Adam Smith</a> standing on the precipice of the <a href="https://en.wikipedia.org/wiki/Industrial_Revolution?ref=adammalone.net">industrial revolution</a>, I can feel stirrings in the underlying structure of traditional content management. No longer are companies and individuals prepared to pay licenses for products of comparable (or lower) quality to those available without charge. With the internet and <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software?ref=adammalone.net">FOSS</a> culture permitting anyone to learn about certain technologies, Drupal included, there is automatically a draw to study without fee.</p><p>As a self-taught example of this very situation, I'm confident that others sharing my passion and drive not only exist, but are already picking up the vital skills and experience necessary to push Drupal past its activation energy (I'm a chemist, remember), after which follows, like an exothermic reaction, huge uptake and a boom in usage.</p><p>Much like those pondering an early investment in Google or Apple, throw yourself in with the open source ticket in APAC, because things are going up.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ When APC appears to fail Drupal ]]></title>
        <description><![CDATA[ There are quite a large number of resources on the internet that exist due to
the error that springs up on Drupal [https://drupal.org/] sites with APC
[https://pecl.php.net/package/APC] enabled:

Cannot redeclare class insertquery_mysql in /path/to/drupal/includes/database/database.inc on line ]]></description>
        <link>https://www.adammalone.net/when-apc-appears-fail-drupal/</link>
        <guid isPermaLink="false">5f3397a221b8f9692ae9381e</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Fri, 15 Nov 2013 12:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/pear_-_php_extension_and_application_repository.png" medium="image"/>
        <content:encoded><![CDATA[ <p>There are quite a large number of resources on the internet that exist due to the error that springs up on <a href="https://drupal.org/?ref=adammalone.net">Drupal</a> sites with <a href="https://pecl.php.net/package/APC?ref=adammalone.net">APC</a> enabled:</p><pre><code class="language-bash">Cannot redeclare class insertquery_mysql in /path/to/drupal/includes/database/database.inc on line 1774.</code></pre><p>Whilst many of them provide workarounds like hacking core to <a href="http://help.getpantheon.com/pantheon/topics/fatal_error_require_once_cannot_redeclare_class_insertquery_mysql_in_srv_bindings_e00a4bcbe3264dbc99e16a53cf85d78b_code_includes_database_database_inc_on_line?ref=adammalone.net" rel="nofollow">include a class_exists() conditional</a>, simply <a href="http://help.getpantheon.com/pantheon/topics/fatal_mysql_error_for_class_insertquery_mysql?ref=adammalone.net" rel="nofollow">clearing the cache</a>, <a href="https://stackoverflow.com/questions/4575341/php-with-apc-fatal-errors-cannot-redeclare-class?ref=adammalone.net">disabling APC</a>, <a href="https://drupal.org/node/838744?ref=adammalone.net#comment-6970108">upgrading</a> to a (now <a href="https://bugs.launchpad.net/ius/+bug/1115670?ref=adammalone.net">non-existent</a>) version of APC or just stating that since it's not Drupal it's '<a href="https://drupal.org/node/838744?ref=adammalone.net#comment-5177030">not our problem</a>'; none really address the root cause of the issue.</p><p>This summoned a memory of my high school history tuition, where I studied, and greatly enjoyed, the topic of <em>Medicine through the ages</em>. A saying first used by the Greeks and Romans in the time of Asclepius was that '<em>Prevention is better than cure</em>'. Admittedly, they probably didn't use that phrase, but the sentiment was definitely understood: a healthy, hygienic lifestyle would suppress and reduce the occurrence of illness. 
In the same way, it is preferable to find the root cause of a bug rather than work around the issue and increase obscurity.</p><p>By downloading and running a code sniffer over some of your custom modules, it's likely you'll turn up some interesting results.</p><h3 id="installing-phpcs"><strong>Installing PHPCS</strong></h3><figure class="kg-card kg-image-card"><img src="/sites/adammalone/files/styles/large/public/ew9c3wz.jpg_480x360_.png?itok=pPKRUgDk" class="kg-image" alt></figure><p>Most operating systems/distros will have their own way of code sniffing PHP for syntax, errors, coding standards and the like. Either <code>pear install PHP_CodeSniffer</code> or <code>brew install php-code-sniffer</code> should be sufficient.</p><p>This will download the <a href="https://pear.php.net/package/PHP_CodeSniffer/?ref=adammalone.net">PHPCS package</a> and allow you to use the code sniffer to tell you how bad your code is and make you feel bad.</p><h3 id="tracing-the-bug"><strong>Tracing the bug</strong></h3><p>Whilst this isn't a surefire fix to the redeclared class issue, fixing this has on occasion made it go away. Since PHP 5.3, <a href="https://php.net/manual/en/language.references.pass.php?ref=adammalone.net">call-time pass-by-reference</a> has been deprecated, with PHP 5.4 removing it entirely. Passing by reference is now implicit, so no additional declarations are required except in the function declaration.</p><p>We can detect where these call-time pass-by-reference issues arise with the following line for our code sniffer.</p><pre><code class="language-bash">phpcs --extensions=php,module,inc,install --standard=Generic --sniffs=Generic.Functions.CallTimePassByReference /path/to/modules</code></pre><p>This takes any files with the php, module, inc or install extension and runs the sniff for call-time pass-by-reference notices/errors. 
This will output something similar to the following for an error that needs fixing:</p><pre><code class="language-bash">FILE: ...path/to/my_deprecated_code.module
--------------------------------------------------------------------------------
FOUND 1 ERROR(S) AFFECTING 1 LINE(S)
--------------------------------------------------------------------------------
 36 | ERROR | Call-time pass-by-reference calls are prohibited
--------------------------------------------------------------------------------</code></pre><p>From here it's trivial to alter the function call so <code>my_function(&amp;$var);</code> becomes <code>my_function($var);</code> and with the errors disappearing, so too, hopefully, will the redeclaration errors!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Explaining Varnish for Beginners ]]></title>
        <description><![CDATA[ A short time ago I published a presentation I gave at DrupalACT
[http://drupalact.org.au/] entitled &#39;Varnish for Beginners
[/post/varnish-beginners]&#39;. Whilst the presentation itself went down well and
those attending hopefully garnered a good amount of knowledge; without me to
talk over it, there aren&#39; ]]></description>
        <link>https://www.adammalone.net/explaining-varnish-beginners/</link>
        <guid isPermaLink="false">5f33970c21b8f9692ae93804</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Tue, 17 Sep 2013 11:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/varnish-bunnies.png" medium="image"/>
        <content:encoded><![CDATA[ <p>A short time ago I published a presentation I gave at <a href="http://drupalact.org.au/?ref=adammalone.net">DrupalACT</a> entitled '<a href="/post/varnish-beginners">Varnish for Beginners</a>'. Whilst the presentation itself went down well and those attending hopefully garnered a good amount of knowledge, without me to talk over it there aren't a huge amount of words to explain Varnish more deeply.</p><h3 id="what-is-varnish"><strong>What is Varnish?</strong></h3><p><a href="https://www.varnish-cache.org/?ref=adammalone.net">Varnish</a> is a reverse proxy HTTP accelerator that is often placed in front of <a href="https://www.drupal.org/?ref=adammalone.net">Drupal</a> sites to act as a first line of defense against the swathe of anonymous users who likely want to view all the interesting content present. Because Varnish is a separate service, it doesn't matter whether the web server that the Drupal site runs on uses <a href="https://httpd.apache.org/?ref=adammalone.net">Apache</a>, <a href="https://nginx.org/?ref=adammalone.net">NGINX</a>, <a href="https://directory.fsf.org/wiki/Comanche_Server?ref=adammalone.net">Comanche</a>, <a href="http://www.iis.net/?ref=adammalone.net">IIS</a> or another piece of software; it'll just work.</p><p>Varnish itself acts as a transparent intermediary between users and the web server backend, with any content surfaced from the backend with the correct cache headers being stored for a limited time. 
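</p><p>For instance, on Drupal 7 the lifetime that ends up in those cache headers is typically controlled from settings.php. A minimal, illustrative fragment (the values here are examples, not recommendations):</p><pre><code class="language-php">// With the anonymous page cache on, Drupal 7 sends headers such as
// "Cache-Control: public, max-age=300", which Varnish uses to decide
// how long to keep a page in its cache.
$conf['cache'] = 1;                     // enable page caching for anonymous users
$conf['page_cache_maximum_age'] = 300;  // external cache lifetime in seconds</code></pre><p>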
The advantages of Varnish are many, with the main ones being:</p><ul><li>Serving content from an in-memory cache means no slow PHP execution and no slow MySQL queries;</li><li>Varnish is capable of delivering at a rate that makes it an <a href="https://groups.drupal.org/node/25617?ref=adammalone.net">F-15 to vanilla Drupal's Cessna</a>;</li><li>Cache headers from Drupal are respected entirely, so unless something is specifically overridden in the Varnish configuration, whatever Drupal says to cache, Varnish will cache.</li></ul><p>All of these together make websites behind Varnish <strong><strong>fast</strong></strong>! Even though the effects are only felt by anonymous users, the majority of traffic for most sites is likely anonymous, so the benefit is usually substantial.</p><h3 id="alternatives"><strong>Alternatives</strong></h3><p>Whilst Drupal has a number of different potential caching strategies, it's arguably Varnish that provides the easiest to set up when balanced against speed benefits.</p><ul><li>Drupal's built-in database cache system is locked to the database, which is slow compared to memory.</li><li>The contributed module '<a href="https://drupal.org/project/boost?ref=adammalone.net">Boost</a>' is generally an acceptable choice on memory-poor servers but caches pages as files in the filesystem. This can lead to problems on network filesystems like <a href="http://www.gluster.org/?ref=adammalone.net">gluster</a> due to the high rate of file read/write operations performed.</li><li><a href="https://memcached.org/?ref=adammalone.net">Memcache</a> is a great choice that works well with Varnish, although it is primarily used for lower-level caches (i.e. caching bootstrap modules and views). 
Memcache forms a cache backend that Drupal can interact with actively to set and get cached items, whereas Varnish is a passive cache that Drupal <em><em>does not know about</em></em>.</li><li>Alternative caches like <a href="http://redis.io/?ref=adammalone.net">Redis</a> and <a href="https://www.mongodb.org/?ref=adammalone.net">MongoDB</a> do exist but, like Memcache, act as active caches that the Drupal site can set and get items from. Drupal also requires a little work to get Redis or Mongo working, which makes them a little more technical to implement.</li></ul><h3 id="installation-configuring-and-vcl"><strong>Installation, configuration and VCL</strong></h3><p>Even though <a href="https://www.centos.org/?ref=adammalone.net">CentOS</a> requires an additional repository, installing Varnish is trivial on Debian, Red Hat and OS X systems.</p><pre><code class="language-bash">(yum|apt-get|brew) install varnish</code></pre><p>is enough to get Varnish on the server and ready to start. The default configuration options in /etc/sysconfig/varnish or /etc/default/varnish should likely be changed to listen on port 80 and have a memory limit sufficient for the server.</p><pre><code class="language-bash">DAEMON_OPTS="-a :80 \
        -u varnish \
        -g varnish \
        -T localhost:6082 \
        -f /etc/varnish/adam.vcl \
        -S /etc/varnish/secret \
        -s malloc,256M";</code></pre><p>The <strong><strong>V</strong></strong>arnish <strong><strong>C</strong></strong>onfiguration <strong><strong>L</strong></strong>anguage file can be used to route requests to the cache or the backend depending on logic defined in the VCL. Whilst the default VCL bundled with Varnish does an acceptable job of handling a cache and speeding up sites for anonymous users, a lot may be changed to make pages more cacheable. Following guidance provided by the <a href="https://fourkitchens.atlassian.net/wiki/display/TECH/Configure+Varnish+3+for+Drupal+7?ref=adammalone.net">fourkitchens VCL</a> and with additional logic from the <a href="https://www.varnish-cache.org/docs/3.0/reference/vcl.html?ref=adammalone.net">Varnish documentation pages</a>, you can ensure you have a cache that flies!</p><h3 id="shielding-drupal-from-the-internet"><strong>Shielding Drupal from the Internet</strong></h3><p>Whilst I'm a huge advocate and evangelist for Drupal and the ease with which sites may be created and extended, it must also be said that when a Drupal site is exposed to the internet and potentially thousands of users who hit the site, it can struggle. With this in mind, we need to shield Drupal from the anonymous users, bots and spammers so Drupal can go on managing content without the relentless barrage of hits.</p><figure class="kg-card kg-image-card"><img src="/sites/adammalone/files/drupal_backend_connections.png" class="kg-image" alt></figure><p>The above picture shows a number of the backend connections that Drupal makes on a typical site. With data stored in the database, and cache stored in Memcache, these contribute to a lot of the network back and forth of the Drupal CMS. Occasional additional connections to <a href="http://mollom.com/?ref=adammalone.net">Mollom</a> to counter spam, Drupal's update service and Google services mean that each time a user request skips Varnish and makes it to the backend, there's more delay. 
Any additional interactions with a local or remote service will both increase the time taken for the response and potentially overload the server.</p><p>By stopping user requests getting to the backend we can prevent the execution of PHP and prevent slow queries to the database and other services.†</p><p>† <em><em>Slow in comparison to fast in-memory caching from Varnish </em></em></p><h3 id="varnish-tools"><strong>Varnish Tools</strong></h3><p>If you're looking to provide a little proof to those who make the business decisions about implementing Varnish, if you want to compile some stats about the software, if there's debugging to be done, or if you just want to see some cool graphs, Varnish provides a number of tools for all these purposes:</p><ul><li><em><em>varnishstat</em></em> - Provides a live view of a number of stats provided by Varnish about the state of the cache;</li><li><em><em>varnishlog</em></em> - Provides a firehose of information that can be grepped and processed to provide further detail, usually for debugging purposes;</li><li><em><em>varnishncsa</em></em> - Gives an output from Varnish that mimics that of an Apache access log;</li><li><em><em>varnishtop</em></em> - Provides a ranked list of log entries, especially useful when used with the -i RxURL / TxURL parameters to show top requests and top requests that miss Varnish.</li><li><em><em>varnishhist</em></em> - Best used with the '-d' flag, provides a histogram where the '|' indicates a cache hit and the '#' a cache miss; the units being hits against time. 
An example Varnish histogram is beneath for viewing enjoyment!</li></ul><figure class="kg-card kg-image-card"><img src="/sites/adammalone/files/varnishhist.png" class="kg-image" alt></figure><h3 id="checking-varnish-works"><strong>Checking Varnish works</strong></h3><p>One of the simplest ways to ensure Varnish is working as expected is to query it using <a href="http://www.isvarnishworking.com/?ref=adammalone.net">isvarnishworking.com</a>, a quick and easy method of observing Varnish headers. This is a lazy alternative to curling the site or inspecting headers in the browser, either of which would produce the following results:</p><pre><code class="language-bash">$ curl -kLsiXGET www.adammalone.net
...snip...
Cache-Control: public, max-age=21600
Last-Modified: Tue, 17 Sep 2013 00:00:15 +0000
Date: Tue, 17 Sep 2013 09:32:36 GMT
X-Varnish: 1607010525 1607008083
Age: 12741
Via: 1.1 varnish
X-Varnish-Cache: HIT</code></pre><h3 id="try-varnish-out-"><strong>Try Varnish out!</strong></h3><p>Since Varnish is so simple to set up and start using from a beginner level, I'd recommend everyone try installing it to gain the benefits of the speed boost! All <a href="https://www.acquia.com/?ref=adammalone.net">Acquia Cloud</a> servers come with Varnish in front of Drupal so if you want the most hassle-free way of seeing the benefits of Varnish then sign up to a <a href="https://insight.acquia.com/free/register?ref=adammalone.net">free account</a> and fly with the cache!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Using git &amp; drush to win at workflow ]]></title>
        <description><![CDATA[ As a sole developer or even as part of a small team (2-5 developers), setting up
a development workflow seems on the face of it like a waste of time. Every
minute that you&#39;re configuring version control, writing backup scripts,
manicuring new environments, or simply tagging and pushing ]]></description>
        <link>https://www.adammalone.net/using-git-drush-win-workflow/</link>
        <guid isPermaLink="false">5f33966a21b8f9692ae937e5</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Tue, 03 Sep 2013 09:30:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/drupal-git-love.png" medium="image"/>
        <content:encoded><![CDATA[ <p>As a sole developer or even as part of a small team (2-5 developers), setting up a development workflow seems on the face of it like a waste of time. Every minute that you're configuring version control, writing backup scripts, manicuring new environments, or simply tagging and pushing code around seems less fun than if those minutes were spent coding.</p><p>What happens, though, when:</p><ul><li>A change is made on a production website that causes an irreversible database change</li><li>You want to see which change made in the last two weeks caused a performance regression</li><li>Management want to QA your work prior to release</li><li>You want to dig deep into an issue that requires debug code out of your eyeballs</li><li>Someone asks about your deployment strategy</li></ul><p>As someone who has progressed from not knowing what <a href="https://en.wikipedia.org/wiki/HTML?ref=adammalone.net">HTML</a> is to someone who <em><em>pretends</em></em> to know a little about <a href="https://www.drupal.org/?ref=adammalone.net">Drupal</a>, I most certainly fell victim to the trap of coding on production for a while. I had no <a href="https://en.wikipedia.org/wiki/Revision_control?ref=adammalone.net">version control</a>, no backup strategy and if the worst had happened, I would have lost <strong><strong>ALL</strong></strong> of my content.</p><p>Since I started working at <a href="https://www.acquia.com/?ref=adammalone.net">Acquia</a>, I've become intimately familiar with the development workflow employed by enterprise-level sites, where QA is mandatory and releases have to be signed off. These are sites with their own development teams and internal issue trackers to fill. The tools on the Acquia Cloud are, to put it simply, fantastic. 
As a solo developer myself, though, I'm aware that they're also beyond the needs of a lot of experimentalists and hackers.</p><p>This is where a personal development workflow strategy comes into play, and where good habits may be picked up.</p><h3 id="drush"><strong>Drush</strong></h3><p>Standing for <strong><strong>DRU</strong></strong>pal <strong><strong>SH</strong></strong>ell, Drush is a command-line tool for managing Drupal sites. Most of the useful commands may be found by visiting the <a href="http://drush.ws/?ref=adammalone.net">online man pages</a>, so I'll only touch a little on the commands and more on the configuration of Drush such that it may become a tool for locally managing remote sites.</p><h3 id="git"><strong>Git</strong></h3><p>With Drush managing the Drupal site, we'll also need something to manage the code that runs the site; this is where version control comes in. Version control provides the ability to ensure complete accountability at every step in the development cycle with the handy benefit of backing up your code! Whilst there is a light-hearted rivalry between proponents of <a href="https://git-scm.com/?ref=adammalone.net">git</a> and <a href="https://subversion.apache.org/?ref=adammalone.net">SVN</a>, I feel that git comes more naturally to me. With this in mind, I'll be focusing on git as a VCS over the course of this guide with SVN being, unfortunately, out of scope.</p><h3 id="workflow"><strong>Workflow</strong></h3><p>The simplest way to explain good workflow for a solo/small team is with the following:</p><ol><li>Use a local environment to make changes pertinent to a single issue;</li><li>Once the issue is thought to be solved, thoroughly test in an interim environment;</li><li>With testing complete, move the changes into production.</li></ol><p>This isolation of environments ensures that there is no pollution of bad code into a place that could affect visitors to the site, which in turn could affect business. 
It also allows for a production site to stay stable whilst a myriad of changes can occur locally, which is conducive to an agile development environment.</p><p>Expanding on the above bullets, we can see why this may have caused consternation in a traditional non-drush/git system due to each step being a time sink. Since I'm such an <a href="/post/necessity-automation">automation advocate</a>, it's my opinion that a workflow must be efficient if people are to be expected to use it.</p><h3 id="the-old-inefficient-way"><strong>The old (inefficient) way</strong></h3><ol><li>Write some cool code that fixes a bug and updates a database schema</li><li>Use the mysqldump tool on a remote server to back up the database</li><li>Copy that database backup to a local store</li><li>Move the codebase to the test environment</li><li>Navigate to update.php</li><li>Click the button to accept database updates... <em><em>etc</em></em></li></ol><p>With git and drush this can happen <em><em>fast</em></em> and without requiring pages of instructions to remember steps.</p><h3 id="setting-up-your-local-and-remove-environments"><strong>Setting up your local and remote environments</strong></h3><p>Assuming the user has access to remote servers via SSH, the first thing that's required is to tell Drush about the remote servers and ensure it is able to get to the Drupal installs. By utilising a drush aliases file, this becomes trivial. I've included my <a href="https://gist.github.com/typhonius/6378209?ref=adammalone.net">alias file here</a> for easy copying and reference. Save your file in the ~/.drush folder as <strong><strong>aliases.drushrc.php</strong></strong> and ensure you alter the file to include your own server details. 
It's also important that the user who logs into the server has access to the docroot where the Drupal installation is and that Drush is installed on the remote server.</p><p>A simple test to ensure you can connect to your remote installation would be to run drush @alias status and see whether it returns the expected details. The elements of your aliases array defined in the aliases.drushrc.php file correspond to the identifier placed after the @ symbol in the above command.</p><p>It's also useful to set up a file to track custom Drush commands you find useful in ~/.drush/<strong><strong>drushrc.php</strong></strong>. I've included a snippet of <a href="https://gist.github.com/typhonius/6420061?ref=adammalone.net">mine here</a> so any commands I refer to further down the page won't seem so unfamiliar.</p><p>For your codebase, the lowest barrier to entry into version control is to create yourself a free <a href="https://bitbucket.org/?ref=adammalone.net">Bitbucket</a> account, head over to the <a href="https://bitbucket.org/repo/create?ref=adammalone.net">repo creation page</a> and follow the steps there.</p><p>After adding all code from one environment to git and pushing it to Bitbucket, you'll want to ensure that this is echoed across however many environments you have. The simplest way to do this is to temporarily move your existing, unversioned codebase out of the way and to pull down your versioned repository.</p><pre><code class="language-bash">cd /var/www/html
mv myawesomewebsite /tmp
git clone git@bitbucket.org:myusername/myawesomewebsite.git</code></pre><p>If you're working with production sites, I'd <strong><strong>strongly</strong></strong> recommend placing the production site in version control and then doing the above operation on all dev sites. This will ensure that you're not moving a live site and in turn ensuring the site does not experience downtime.</p><h3 id="some-workflow"><strong>Some workflow</strong></h3><p>From here, with Drush able to connect to the remote sites and with everything under git, it becomes trivial to make changes on a site whilst also using best practices in workflow. Taking the example of a single update to a single file (let's use the .htaccess), the following would be recommended.</p><pre><code class="language-bash"># Backup your production database
drush @prod sql-dump &gt; /path/to/local/backups.sql
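# (Aside: should a deploy go wrong, the dump above can be piped back in.
# This restore command is illustrative, using drush's sql-cli; it is
# commented out so it only runs when you need it.)
# drush @prod sql-cli &lt; /path/to/local/backups.sql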

# Sync your production database so you are working with the latest database across your environments:
drush @dev sync-dbs
drush @test sync-dbs
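# (sync-dbs is a custom command from the drushrc.php snippet linked above;
# with stock drush, the equivalent would be sql-sync, e.g.:)
# drush sql-sync @prod @dev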

# Add your changes to the git repository
git add .htaccess
git commit
git push
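# (Optional, illustrative: tagging each release makes production deploys
# reproducible; the tag name below is an example.)
# git tag release-1.0
# git push origin release-1.0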

# Pull the latest commits and ensure the database is updated on your test environment
drush @test pulldb
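# (pulldb is a custom command from the drushrc.php snippet linked above;
# without it, the equivalent manual steps would be to pull the latest
# commits in the docroot and run any pending database updates:)
# git pull
# drush @test updatedb -y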

# Utilise the test environment to check for errors and QA the update
# Switch production into maintenance mode (optional) and run the same operation as on @test
drush @prod offline
drush @prod pulldb
drush @prod online
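# (Optional sanity check: confirm the production site still bootstraps
# cleanly after the deploy.)
# drush @prod status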
</code></pre><p>Whilst this method won't have your production code on tags, it does reduce the barrier for developers to start using drush and git with speed and ease. Questions and comments should go here; get in touch with me <a href="/contact-me">over here</a> and I'll update this post with any improvements should they arise!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Best practices, workflow and how not to break your site ]]></title>
        <description><![CDATA[ On Friday August 23rd, I was part of a contingent of Acquia
[https://www.acquia.com/] employees attending a Drupal [https://www.drupal.org/] 
conference in my current home city of Canberra [https://goo.gl/rG0GS3].

DrupalGov Canberra [http://drupalact.org.au/events/drupalgov-canberra-2013] was
a day long event with ]]></description>
        <link>https://www.adammalone.net/best-practices-workflow-and-how-not-break-your-site/</link>
        <guid isPermaLink="false">5f33960821b8f9692ae937c9</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Thu, 29 Aug 2013 10:20:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/best_practices_drupalgov_canberra_-_google_drive.png" medium="image"/>
        <content:encoded><![CDATA[ <p>On Friday August 23rd, I was part of a contingent of <a href="https://www.acquia.com/?ref=adammalone.net">Acquia</a> employees attending a <a href="https://www.drupal.org/?ref=adammalone.net">Drupal</a> conference in my current home city of <a href="https://goo.gl/rG0GS3?ref=adammalone.net">Canberra</a>.</p><p><a href="http://drupalact.org.au/events/drupalgov-canberra-2013?ref=adammalone.net">DrupalGov Canberra</a> was a day-long event with speakers and attendees from all over Australia. With Drupal already having a strong foothold in the private sector as the go-to CMS/CMF for driving some of the most <a href="https://dev.twitter.com/?ref=adammalone.net">popular</a>, <a href="http://www.almasryalyoum.com/?ref=adammalone.net">highly trafficked</a> <a href="https://www.x.com/?ref=adammalone.net">sites</a> online, and the <a href="https://petitions.whitehouse.gov/?ref=adammalone.net">US government</a> already seeing a lot of exposure to Drupal, it's only natural the Australian government would want in!</p><p>The aim of the conference was to show CIOs, web managers and government development teams that Drupal is both applicable for <a href="https://groups.drupal.org/government-sites?ref=adammalone.net">government use</a> and backed by a great community of companies and users.</p><p><strong><strong>On presenting</strong></strong></p><p>I was given the privilege of presenting a talk at the event and decided to discuss '<em><em>Best Practices and Workflow</em></em>' within a Drupal development environment, which is something I've grown to respect and advocate. Throughout my learning of Drupal, I have simultaneously picked up knowledge of good development workflow and tips for not breaking Drupal. 
It's an easy decision to <em><em>quickly make a hacky change on production, </em></em>but by introducing good workflow and using tools like <a href="https://git-scm.com/?ref=adammalone.net">git</a> and <a href="https://drupal.org/project/drush?ref=adammalone.net">drush</a>, it becomes easier to do things the right way.</p><p>I've made my presentation <a href="https://goo.gl/KZL2YH?ref=adammalone.net">available online from Google Drive</a>, or as a <a href="/sites/adammalone/files/best_practices_drupalgov_canberra.pdf">PDF download</a>, although it may not make a lot of sense without me talking on top of it. With this in mind, I've <a href="/post/using-git-drush-win-workflow">written up a summary</a> of my personal development environment and workflow, and included a git/drush cheat sheet for use in your own teams.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Varnish for beginners ]]></title>
        <description><![CDATA[ My first experience with Varnish [https://www.varnish-cache.org/] was whilst I
worked at Agileware [http://agileware.com.au/] and was required to create a 
Pressflow [http://pressflow.org/] Drupal 6 site for a company who were expecting
to receive a lot of traffic due to television advertising.

Now, since ]]></description>
        <link>https://www.adammalone.net/varnish-beginners/</link>
        <guid isPermaLink="false">5f3395cc21b8f9692ae937b7</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Thu, 13 Jun 2013 23:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/varnish-bunny.png" medium="image"/>
        <content:encoded><![CDATA[ <p>My first experience with <a href="https://www.varnish-cache.org/?ref=adammalone.net">Varnish</a> was whilst I worked at <a href="http://agileware.com.au/?ref=adammalone.net">Agileware</a> and was required to create a <a href="http://pressflow.org/?ref=adammalone.net">Pressflow</a> Drupal 6 site for a company who were expecting to receive a lot of traffic due to television advertising.</p><p>Now, since starting at <a href="https://acquia.com/?ref=adammalone.net">Acquia</a>, I've had a great deal more experience since each of our customers has their website sitting behind Varnish.</p><p>With some of the initial knowledge learned at Agileware, some interim findings after researching Varnish outside of work and more tuition in the software at Acquia, I've built up enough knowledge to pass on to others who are interested.</p><p>Within this post, I've included a <a href="https://goo.gl/VjYv3?ref=adammalone.net">link to the presentation</a> on Google Drive and <a href="/sites/adammalone/files/varnish_for_beginners.pdf">downloadable slides</a> in PDF format. Feel free to use and extend as you see fit; just remember to attribute!</p><p>Any questions can be sent to the email included in the presentation or alternatively left as comments so others may benefit from the answers.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Website in a weekend ]]></title>
        <description><![CDATA[ After seeing a few of the Confession pages spring up on facebook and with the StalkerSpace craze a while ago I&#39;ve decided to undertake a little proof of concept.

How?

I always enjoy getting an enormous whiteboard, a number of intelligent minds and hashing out ideas to see ]]></description>
        <link>https://www.adammalone.net/website-weekend/</link>
        <guid isPermaLink="false">5f33957021b8f9692ae937a6</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Fri, 31 May 2013 11:00:00 +0000</pubDate>
        <media:content url="https://www.adammalone.net/content/images/2020/08/confessional.png" medium="image"/>
        <content:encoded><![CDATA[ <p>After seeing a few of the <em><em>Confession</em></em> pages spring up on facebook and with the <em><em>StalkerSpace</em></em> craze a while ago, I've decided to undertake a little proof of concept.</p><p><strong><strong>How?</strong></strong></p><p>I always enjoy getting an enormous whiteboard, a number of intelligent minds and hashing out ideas to see what sticks. Visually, the whiteboard ensures both a coolness factor for documentary pictures after the fact and a way of allowing people to air their thoughts when words fail.</p><p>Optimum configuration would have a semi-circle of tables around a central whiteboard; lots of drawings and everyone using <a href="https://en.wikipedia.org/wiki/Git_(software)?ref=adammalone.net">git</a>.</p><p><strong><strong>When?</strong></strong></p><p>Similar to the <a href="https://www.adammalone.net/drupal-sprint-weekend-wrap-up/">Drupal Sprint Weekend</a>, I can't wait to get stuck into creating a cool Drupal site with like-minded people. With a concerted effort of developers, designers, site builders and end users for research purposes, I'm confident we can knock it off in a weekend!</p><p><strong><strong>Why?</strong></strong></p><p>A lot of social networks are increasingly turning towards <a href="https://www.facebook.com/help/457469094277849/">full identification of users</a>, although I think anonymity has a place in online society. Websites such as <a href="https://www.4chan.org/?ref=adammalone.net">4chan.org</a> remain strongholds where identification is entirely optional. If only as a proof of concept, we'll take <em><em>Confession</em></em> sites off facebook and into a forum where users can post on the site with a guarantee of anonymity and without the need of a moderator to take anonymous emails and post them to the site.</p><p>Another reason to try is to aggregate all <em><em>Confession</em></em> sites under one name on one cohesive website. 
Similar to the <a href="https://www.google.com.au/intl/en/about/products/?ref=adammalone.net">Google suite of products</a>, rather than having sites sprinkled everywhere online, all sites will be in one location, ready for users to join and contribute to as they please.</p><p>So, here's to rapid building of sites and another blog post when we're done.</p><h3 id="disclaimer"><strong>Disclaimer</strong></h3><p>Admittedly, the site wasn't built in a weekend, although the majority of the architecting was. Whilst the site itself, logic and tweak modules were ready, there are a lot of tiny things that are for the most part mundane yet essential to operation.</p><ul><li>Setting permissions;</li><li>Creating views;</li><li>Writing text for basic pages on the site;</li><li>Anything else remotely site building-y</li></ul><p>These are getting slotted in whenever we have time, along with a theme since however nice it is, Bartik likely won't cut it with the cool kids.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Feature creep and limitations ]]></title>
        <description><![CDATA[ One of the first things I&#39;d imagine a lot of freelancers experience is feature
creep.

What is feature creep?
As the simplest definition, feature creep refers to the myriad small changes
that go beyond the initial approved plan, introducing unplanned additions and
cause the project to drag.
As ]]></description>
        <link>https://www.adammalone.net/feature-creep-and-limitations/</link>
        <guid isPermaLink="false">5f33952021b8f9692ae93792</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sat, 11 May 2013 21:30:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/dunes.png" medium="image"/>
        <content:encoded><![CDATA[ <p>One of the first things I'd imagine a lot of freelancers experience is feature creep.</p><h3 id="what-is-feature-creep"><strong>What is feature creep?</strong></h3><p>As the simplest definition, feature creep refers to the myriad small changes that go beyond the initial approved plan, introducing unplanned additions and causing the project to drag.<br>As a freelance developer, when a client has a list of features and a brief is prepared, that contract exists to protect both parties. The client is protected with a list of tasks that are required to be completed and the developer is protected with confirmed payment on receipt of said tasks.</p><p>When the required tasks are extended like the Sahara creeping into <a href="http://earthobservatory.nasa.gov/IOTD/view.php?id=6234&ref=adammalone.net">Nouakchott</a>, this is feature creep.</p><p>Feature creep can exist in a number of ways - from seemingly the most minor (creation of new views/panels) to larger features such as an alteration in site architecture.</p><h3 id="my-experience"><strong>My experience</strong></h3><p>It’s all too easy to fall into the trap of going beyond the brief to supply just one more thing in an effort to stay on the good side of the client.<br>With that, however, comes the risk of continually being expected to provide just one more thing.<br>For companies on a budget, start-ups, or those inexperienced with brief writing, there seems to be a risk of trying to get the most out of contractors.<br>Care should be taken not to fall into the trap of going along with it to appease the client.</p><p>With one of the first clients who sought my services to speed up and augment site experience, I almost instantly fell into that trap, despite the fact that I was more than aware of the risk.</p><p>Although a brief had been created, with a list of requirements and tasks, the company (a start-up) was developing rapidly and had big ideas with constantly changing requirements. 
Tasks started off well enough with everything completed to specification. One by one, items were checked off the list and signed off on the client end.</p><p>Close to their launch deadline, however, things were getting frantic for their team. Funders and investors were on their back and since our working relationship was good, their troubles were conveyed to me in our biweekly catch-ups.<br>With this extra pressure on their back, I found myself stumbling into the unfortunate situation of agreeing to help where they were behind. I regretted it from a business standpoint almost instantly since there was no plan/brief to protect me. However, because I, too, was excited about their launch after working with them closely and consistently, it served as a form of consolation that I was helping them out.</p><h3 id="how-it-ends"><strong>How it ends</strong></h3><p>At an estimate, despite contracting for 40 hours, I ended up working about 45-50. The release was on time, but those 5-10 were lost billable hours however you look at it.</p><p>Easy as it is to simply agree to more work off the cuff and take up a verbal agreement, I can’t stress enough the importance of getting any further hours contracted. Empathy is almost always a factor when developers agree to work reminiscent of feature creep. Getting drawn into the work isn’t a bad thing but it’s always worth taking a minute to check that you, as the developer, are covered for the work.<br>Luckily for me, subsequent to the completion of the project, I was asked about the extra hours and compensated accordingly.</p><p>As a developer, it’s almost inevitable that you will experience feature creep in one of its forms. As long as work is agreed to in writing prior to continuing, there is no creep and you’ve done as you’re expected to do!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Hostname and mail woes ]]></title>
        <description><![CDATA[ A few months ago, I switched from using a Debian based distribution on my main
machine to Fedora.

I like to customise my machine a little prior to developing and creating local
sites to hack core [https://drupal.org/best-practices/do-not-hack-core], create
new modules [https://drupal.org/developing/modules] and ]]></description>
        <link>https://www.adammalone.net/hostname-and-mail-woes/</link>
        <guid isPermaLink="false">5f33944c21b8f9692ae9374e</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Wed, 10 Apr 2013 21:30:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/headers.png" medium="image"/>
        <content:encoded><![CDATA[ <p>A few months ago, I switched from using a Debian-based distribution on my main machine to Fedora.</p><p>I like to customise my machine a little prior to developing and creating local sites to <a href="https://drupal.org/best-practices/do-not-hack-core?ref=adammalone.net">hack core</a>, <a href="https://drupal.org/developing/modules?ref=adammalone.net">create new modules</a> and generally contribute back to the community. Some of the changes are for performance and some are for vanity. An example of a performance change would be to install <a href="https://php.net/manual/en/install.fpm.php?ref=adammalone.net">PHP-FPM</a> and use it with <a href="https://httpd.apache.org/docs/2.2/mod/worker.html?ref=adammalone.net">Apache worker</a>. A vanity change would be to have an ASCII <a href="https://drupal.org/node/9068?ref=adammalone.net">Druplicon</a> in my <a href="http://linux.about.com/library/cmd/blcmdl5_motd.htm?ref=adammalone.net">motd</a>; not strictly necessary but pretty cool.</p><p>One such change I made recently was to alter the hostname of my laptop, since I had not set it at distro install. Unfortunately, I neglected to map the new hostname to my laptop in either /etc/hosts or /etc/sysconfig/network.</p><h3 id="the-first-realisation"><strong>The first realisation</strong></h3><p>After installing Drupal sites on my local machine I'm accustomed to receiving welcome emails. As it happened, the emails stopped arriving. 
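The mistake above (a new hostname missing from /etc/hosts) is easy to demonstrate against a scratch copy of a hosts file. A minimal sketch, where the hostname typhonius-laptop comes from this post and the lookup function is only a stand-in for the real resolver:

```shell
# Scratch file standing in for /etc/hosts
hosts_file=$(mktemp)
printf '127.0.0.1 localhost\n' > "$hosts_file"

# Stand-in for a resolver lookup against the hosts file
lookup() { grep -qw "$1" "$hosts_file" && echo found || echo 'unknown host'; }

lookup typhonius-laptop    # before the fix: unknown host

# The fix: map the new hostname (on a real box, restart postfix afterwards)
printf '127.0.0.1 typhonius-laptop\n' >> "$hosts_file"
lookup typhonius-laptop    # after the fix: found
```

On the real machine the same one-line addition to /etc/hosts, followed by a postfix restart, is the whole fix.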
This didn't really strike me as odd or annoying, since all they did was speedily fill up my inbox; they were not missed.</p><p>However, when writing and testing implementations of the subscriptions/notifications modules later on, the need to send emails became rather crucial; they were then missed.</p><h3 id="fixing"><strong>Fixing</strong></h3><p>Although rather confusing at first, I eventually tracked down the missing mails in the maillog, where I found a number of messages indicating postfix simply couldn't find the host typhonius-laptop. Rather sheepishly, I pinged typhonius-laptop and waited an agonising number of seconds whilst slowly accepting what had happened: 'unknown host'.</p><p>Adding my shiny new hostname to my hosts file and restarting postfix fixed everything. So if I ever decide to change hostname and can't send mail immediately afterwards, I'll know what to do, whilst slapping my forehead over and over.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Showing China the banhammer ]]></title>
        <description><![CDATA[ Recently I started noticing a spike in the overall bandwidth on my server. A
little bit of investigation revealed some interesting albeit confusing details.

First contact
Towards the end of last month&#39;s billing cycle for my server I realised I&#39;d
actually overshot my bandwidth cap by ]]></description>
        <link>https://www.adammalone.net/showing-china-banhammer/</link>
        <guid isPermaLink="false">5f3393cd21b8f9692ae9372c</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sat, 30 Mar 2013 10:30:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/china_network.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Recently I started noticing a spike in the overall bandwidth on my server. A little bit of investigation revealed some interesting, albeit confusing, details.</p><h3 id="first-contact"><strong>First contact</strong></h3><p>Towards the end of last month's billing cycle for my server I realised I'd actually overshot my bandwidth cap by a little bit. This wasn't anything unusual, as I was always pushing it in previous months, but I decided to keep an eye on it at the start of the next month.</p><p>Two days into the next billing cycle, I'd used up half of my available bandwidth - oh dear, something was definitely wrong.</p><h3 id="investigation"><strong>Investigation</strong></h3><p>I figured the bandwidth spike could be due to a number of things:</p><ul><li>My website is suddenly crazy popular and generating huge amounts of traffic - <em><em>optimistic</em></em></li><li>Someone is hotlinking images on my site to somewhere else really popular - <em><em>unlikely</em></em></li><li>My server has been hijacked and is being used as a reverse proxy - <em><em>I hope not</em></em></li><li>Spammers are attempting to post comments an inordinate number of times - <em><em>possible</em></em></li><li>There's an error in the reporting <em><em>- unlikely</em></em></li></ul><p>So to narrow down the options, I employed the assistance of <a href="https://www.mammoth.net.au/?ref=adammalone.net">MammothVPS</a>'s performance page, <a href="https://www.cloudflare.com/?ref=adammalone.net">CloudFlare</a>, <a href="http://awstats.sourceforge.net/?ref=adammalone.net">AWStats</a>, <a href="https://en.wikipedia.org/wiki/Netstat?ref=adammalone.net">netstat</a>, and <a href="http://iptraf.seul.org/?ref=adammalone.net">IPTraf</a>. I was able to see that the huge amount of traffic was on port 80, at a fairly consistent rate of 250kb/s-300kb/s over the past few days. 
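Narrowing traffic down to its source IPs can be sketched against sample netstat output. The IPs below are illustrative, and the awk field position assumes the Foreign Address column of Linux netstat's connection lines:

```shell
# Sample 'netstat -nt' output: two header lines, then one line per connection
sample='Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 10.0.0.1:80             120.43.1.5:51000        ESTABLISHED
tcp        0      0 10.0.0.1:80             120.43.1.5:51001        ESTABLISHED
tcp        0      0 10.0.0.1:80             27.159.2.9:40210        ESTABLISHED'

# Skip the headers, take the IP from the Foreign Address column,
# then count connections per remote IP, busiest first
echo "$sample" | awk 'NR > 2 { split($5, a, ":"); print a[1] }' \
    | sort | uniq -c | sort -rn
```

With real traffic, the same pipeline makes a handful of dominant source IPs stand out immediately.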
A couple of netstat commands allowed me to see that the majority of Apache's workers were occupied by connections from a limited number of IPs.</p><p>Further perusal of the awstats pages for my server displayed both the large spike and the narrow range of IPs from which the requests originated.</p><p>The first thing to notice is that most of the requests in the top 10 list come from either <a href="http://www.infosniper.net/index.php?ip_address=120.43.255.255&map_source=1&overview_map=1&lang=1&map_type=1&zoom_level=7&ref=adammalone.net">120.43.0.0/16</a> or <a href="http://www.infosniper.net/index.php?ip_address=27.159.255.255&map_source=1&overview_map=1&lang=1&map_type=1&zoom_level=7&ref=adammalone.net">27.159.0.0/16</a>. From this, it doesn't take much to realise the origin was China, specifically the Chinanet Fujian Province Network. What was also interesting was that <strong><strong>ALL</strong></strong> of the requests from this rather distributed attack were for a non-existent page on my site - another indication of malicious intent.</p><p>Unfortunately, with around 80000 pages visited and 230KB delivered in the 404 response to each request, an excess of bandwidth was used: <a href="https://www.google.com.au/search?q=80000*230KB&ref=adammalone.net">17.5GB</a>.</p><h3 id="dealing-with-it"><strong>Dealing with it</strong></h3><p>The simplest method of blocking all these requests was not adding the IPs to the banlist in Drupal (<a href="https://drupal.org/node/1570102?ref=adammalone.net">a rather ineffective method of restricting access</a>), but adding them to iptables or to my CDN/optimiser, CloudFlare.</p><p>Running commands like the following: </p><pre><code class="language-bash">iptables -A INPUT -s 120.43.0.0/16 -j DROP</code></pre><p>with all of the different ranges, on both my server and my CloudFlare configuration pages, really quietened things down. I have to consider myself far less popular now, as I no longer see anywhere near the same volume of bandwidth per day.</p><p>Bye 
bye <a href="https://www.google.com.au/search?q=256*256*6&ref=adammalone.net">400000 IPs</a>!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Drupal Distributions: Speeding up site deployment ]]></title>
        <description><![CDATA[ Last Thursday I presented Drupal Distributions: Speeding up site deployment for
the DrupalACT March [https://groups.drupal.org/node/285823] meet up.

A new lunchtime format allowed for a different audience to attend and I&#39;m
grateful to those who took the time out of their work day to ]]></description>
        <link>https://www.adammalone.net/drupal-distributions-speeding-site-deployment/</link>
        <guid isPermaLink="false">5f3392bb21b8f9692ae93700</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 17 Mar 2013 21:30:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/install_profile.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Last Thursday I presented Drupal Distributions: Speeding up site deployment for the <a href="https://groups.drupal.org/node/285823?ref=adammalone.net">DrupalACT March</a> meet up.</p><p>A new lunchtime format allowed for a different audience to attend and I'm grateful to those who took the time out of their work day to show their support!</p><p>The format of the presentation was a brief introduction to the three key components I use to create distributions:</p><ul><li><a href="https://drupal.org/project/features?ref=adammalone.net">Features</a></li><li><a href="https://drupal.org/project/profiler_builder?ref=adammalone.net">Profiler Builder</a></li><li><a href="https://drupal.org/project/drush?ref=adammalone.net">Drush Make</a></li></ul><p>There's a <a href="https://drupal.org/node/1943482?ref=adammalone.net">small issue</a> in Profiler Builder that is currently being addressed. If a profile is created using Profiler Builder without the <a href="https://drupal.org/project/profiler?ref=adammalone.net">Profiler</a> library and <a href="https://drupal.org/project/libraries?ref=adammalone.net">Libraries API</a>, the subsequent installation will fail with a <a href="https://en.wikipedia.org/wiki/Screen_of_death?ref=adammalone.net#Other_screens_of_death">WSOD</a> unless</p><pre><code class="language-php">!function_exists('profiler_v2') ? require_once('libraries/profiler/profiler.inc') : FALSE; profiler_v2('profile name');</code></pre><p>is removed from the profile.install file.</p><p>I've included the files I used in the live demonstration so people may test the creation of distributions and usage of drush make themselves. 
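For anyone who has not seen one before, a make file is a short ini-style manifest of core plus projects. A minimal Drupal 7 sketch, written out via a heredoc; this is illustrative only and is not the drush.make.txt from the demo, and the project list is assumed:

```shell
# Write a minimal, illustrative make file (not the demo's drush.make.txt)
cat > example.make <<'EOF'
; Drush make API and Drupal core versions
api = 2
core = 7.x
projects[] = drupal
; Contrib projects fetched from drupal.org at their recommended releases
projects[] = views
projects[] = features
EOF
```

Running drush make against a file like this downloads core and each listed project into the target directory.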
The simplest method is to download <a href="/sites/adammalone/files/drush.make.txt">drush.make.txt</a> to a suitable location and then run:</p><pre><code class="language-bash">drush make drush.make.txt &lt;install location&gt;</code></pre><p>Navigate to the install location in your browser and install the distribution!</p><p>I've attached a <a href="/sites/adammalone/files/drupal_distributions_speeding_up_site_deployment.pdf">PDF copy</a> of the presentation to this post, as well as the <a href="https://goo.gl/gwgvb?ref=adammalone.net">google presentation</a>; comments regarding either are more than welcome here!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Wenatex: How I went to a free dinner ]]></title>
        <description><![CDATA[ A while ago I was invited to a Wenatex event and wrote a fairly popular article about it. Recently I received another invitation and decided to attend the event, take notes, obtain information and generally be a conduit through which people can ascertain more information about the company and what ]]></description>
        <link>https://www.adammalone.net/wenatex-how-i-went-free-dinner/</link>
        <guid isPermaLink="false">5f3391b821b8f9692ae936c2</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Wed, 06 Feb 2013 20:00:00 +0000</pubDate>
        <media:content url="https://www.adammalone.net/content/images/2020/08/sleep_demonstration.png" medium="image"/>
        <content:encoded><![CDATA[ <p>A while ago I was invited to a <a href="http://wenatex.com.au/?ref=adammalone.net" rel="nofollow">Wenatex</a> event and wrote a <a href="https://www.adammalone.net/post/wenatex-how-i-was-invited-free-dinner">fairly popular article about it</a>. Recently I received another invitation and decided to attend the event, take notes, obtain information and generally be a conduit through which people can ascertain more information about the company and what the free invitation really entails.</p><p>The invitation (pictured right)</p><figure class="kg-card kg-image-card"><img src="https://www.adammalone.net/sites/adammalone/files/styles/square_thumbnail/public/wenetex_invite.png" class="kg-image" alt loading="lazy"></figure><p>gave a tiny bit of information but no real details of Wenatex itself; much as was the case with the previous invite I received and blogged about.</p><p>Shortly before 7pm, I arrived at the venue, the <a href="http://www.anu.edu.au/unihouse/functions/funcvenues.html?ref=adammalone.net">Torrance Room</a> within <a href="http://www.anu.edu.au/unihouse/?ref=adammalone.net">University House</a> at the <a href="http://anu.edu.au/?ref=adammalone.net">Australian National University</a>. It was fitted out to banquet spec: twelve chairs in an 'L' shape, with the focal point being a single bed and a Wenatex-branded display poster. My instantaneous reaction was favourable, as it appeared a lot of time and effort had gone into attempting to impress event guests.</p><p>In total, there were four attendees and the presenter: a man who made an effort to get to know us and to tell us a little about himself.</p><p>The night started with around 15-20 minutes of introduction to Wenatex the company. It covered who they were, what they did, information about how the company was founded and their mission. The presenter was knowledgeable about the company and seemed well practised at the spiel. 
Although I had done some prior research into the company, there was additional information that I garnered from the intro.</p><p><strong><strong>The dinner</strong></strong><br></p><figure class="kg-card kg-image-card"><img src="https://www.adammalone.net/sites/adammalone/files/styles/square_thumbnail/public/dinner_event.jpg" class="kg-image" alt loading="lazy"></figure><p>Following the introduction came the free dinner, as offered. Wenatex are not the caterer, so I don't feel it fair to besmirch the quality of the meal; they, after all, did not cook it. I was served some kind of chilli/rice dish with sides of corn and fresh bread. It was edible, but even as a recipe-following amateur I could probably create better. They also served a fried calamari/prawn tempura attempt with a side of chips and salad to other event guests. Again, I don't think it's overly critical to say it was nothing special.</p><p>That being said, Wenatex are not in the food business and they change venue (hence food) with each event, so your mileage may vary.</p><p><strong><strong>The talk</strong></strong></p><p>I can't say I <em><em>learned</em></em> a huge amount of information about sleeping from the talk itself. I often got the feeling that the questions posed by the speaker were leading, with the sole purpose of making the participant think the Wenatex product is the only 'sleep system' appropriate for them.</p><ul><li>Would you spend a third of your life being uncomfortable on purpose?</li><li>When you wake up have you ever felt tired?</li><li>Has anyone woken up with aches ever?</li></ul><p>My opinion was that there was a fair amount of pseudoscience and confidence play in order to get the eventual sale. There was a huge amount of information thrown at me and I had no access to fact-checking sources, so aside from the things that I <em><em>knew</em></em> were incorrect I had to take everything at face value. 
I found a <a href="https://mega.co.nz/?ref=adammalone.net#!iQwSCAwD!ThCCej-llD86XDZXtxTlfgAzOfABTMzSZjVilrCK8II">link to one of their seminar booklets</a>, which I believe is available on their website. This should give readers an idea of the sort of information given out.</p><p>One thing I did notice about the talk was that from the start the emphasis was decidedly not on sales. The talk appeared to take the standpoint that the presenter is a 'sleep expert' giving tips on how to improve sleeping habits. Each person was encouraged to fill in a questionnaire, which was then analysed and recommendations given on improving quality of sleep. Since I have no trouble sleeping, with my only vice being that I regularly get as little as 4-6 hours a night, I was encouraged to 'Make the most of the sleep with a better bed'. If an attendee suffered from temperature variation, it was advised that the sleep system would remedy that. If there was disruption via snoring, the orthopaedic pillow would solve that. The audience were led to draw their own conclusions from the evidence given, and had it all been taken at face value as true, the obvious conclusion would be that Wenatex are the <strong><strong>only</strong></strong> bed manufacturer to supply adequate beds.</p><figure class="kg-card kg-image-card"><img src="https://www.adammalone.net/sites/adammalone/files/styles/square_thumbnail/public/free_gifts.jpg" class="kg-image" alt loading="lazy"></figure><p><strong><strong>The free gift</strong></strong></p><p>Temptations of $50 free gifts are bound to ensnare some people. 
The gifts offered were either:</p><ul><li><a href="http://amazinggiftshop.com.au/Images/items/amazing-knife-sharpener-sml4.png?ref=adammalone.net">Knife sharpener</a></li><li><a href="http://amazinggiftshop.com.au/images/items/Ped55_1tube.jpg?ref=adammalone.net">Skin Cream</a></li></ul><p>In addition, the attendee was given a $30 voucher to shop at <a href="http://amazinggiftshop.com.au/?ref=adammalone.net" rel="nofollow">http://amazinggiftshop.com.au</a>. My definition of a gift does not stretch to a voucher for <em><em>money off</em></em>, but as an observer only interested in the experience it was not my place to stir, rather just to report back the facts.</p><p><strong><strong>The bed</strong></strong></p><p>I was able to lie down on the bed and test it toward the end of the seminar. The sheer length of time taken to 'learn' everything about sleep meant that by this time I was ready to sleep for real. It was no doubt a comfortable bed to lie in, but I would guess it all boils down to personal preference and budget. My personal opinion is that there is a fair degree of <a href="https://en.wikipedia.org/wiki/Snake_oil?ref=adammalone.net">snake oil</a> involved with certain aspects of the entire package, namely the herbal inlay, but I would not stand in the way of any person wishing to buy a bed; indeed, a couple attending did buy one.</p><p>The prices were shown at the very end of the seminar. Again, I managed to find an online copy of the <a href="https://mega.co.nz/?ref=adammalone.net#!2AwjUSbY!Xrseug16t3qzLV6qNfnt0AJWz_Z3iCrb7DCJ8SA-y3Q">seminar price list</a> for perusal. As it states, seminar prices are at a 25% reduction, which is very compelling in a hard-sell environment.</p><p><strong><strong>Thoughts</strong></strong></p><p>However interesting the talk may be to people attending, I was strongly reminded of my <a href="https://www.adammalone.net/post/time-money">Time = Money post</a>. 
Spending four hours at a seminar on sleep is simply not worth my time for the '$50 free gift' and, at a guess, a $5 dinner. I do not, however, believe that it is a scam. There is no requirement to buy anything and attending the event is obligation-free. I ended up leaving 4 hours poorer, with a full stomach and a knife sharpener. If you're looking for a new bed, consider this company in the same way you consider all others. The only major difference is how they market and sell their beds, with suspicion perhaps aroused by this slightly unusual tactic.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Drupal Sprint Weekend Canberra March 9th-10th 2013 ]]></title>
        <description><![CDATA[ Following the post on g.d.o [https://groups.drupal.org/node/277768], I have
decided to host a sprint in Canberra! We have two potential venues depending on
the number of people who wish to attend, both very close to the city, with
excellent transport links and food close ]]></description>
        <link>https://www.adammalone.net/drupal-sprint-weekend-canberra-march-9th-10th-2013/</link>
        <guid isPermaLink="false">5f33916121b8f9692ae936ae</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sat, 26 Jan 2013 01:15:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/canberra_sprint.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Following the <a href="https://groups.drupal.org/node/277768?ref=adammalone.net">post on g.d.o</a>, I have decided to host a sprint in Canberra! We have two potential venues depending on the number of people who wish to attend, both very close to the city, with excellent transport links and food close by.</p><p>At present I plan on being available both days, but depending on attendee numbers that may change to just one day. Make your intention to participate known in either the comments on this site or on the <a href="https://groups.drupal.org/node/279278?ref=adammalone.net">g.d.o event</a>.</p><p>Everyone is welcome; if you have built a site in Drupal, you can contribute. We will split into pairs and work on Drupal core issues. Bring your laptop. If possible, install git before coming and git clone Drupal 8 core. For new folks: you can also get a head start by making an account on <a href="https://drupal.org/?ref=adammalone.net">Drupal.org</a> and taking a look at the <a href="http://drupalladder.org/?ref=adammalone.net">Drupal Ladder</a>.</p><p>The code sprint will take place at the Australian Medical Association on both days. Since it's a weekend, the building will not be open to the public, so attendees will need to call 0430365015 to gain entrance.</p><p>I'll be present from 9am until late on both days, so feel free to drop in and participate whenever.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Too much caching to code ]]></title>
        <description><![CDATA[ Traditionally, and sensibly, one develops a site before attempting to optimise
the server. The reason behind this being that simple changes to the site
(modules or themes) that would require a page refresh to take effect could end
up requiring either a site cache clear or even an Apache/PHP ]]></description>
        <link>https://www.adammalone.net/too-much-caching-code/</link>
        <guid isPermaLink="false">5f33911221b8f9692ae9368f</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sat, 19 Jan 2013 07:50:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/apc.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Traditionally, and sensibly, one develops a site before attempting to optimise the server. The reason is that simple changes to the site (modules or themes) that would normally require only a page refresh to take effect could end up requiring either a site cache clear or even an Apache/PHP restart. After recently optimising a site on a server ready for production, I had to undo a number of optimisations so development could continue, inspiring me to write this.</p><p><strong><strong>How much caching are we talking?</strong></strong></p><p>With a heavily optimised site, it's likely <a href="https://memcached.org/?ref=adammalone.net">Memcache</a> will be installed, Apache will be tuned and Drupal will have all the boxes ticked within the 'admin/config/development/performance' settings page.</p><figure class="kg-card kg-image-card"><img src="/sites/adammalone/files/styles/large/public/caching.png" class="kg-image" alt></figure><p>The combination of these factors will in itself require the cache to be cleared when altering JS/CSS or even content on pages (since they are cached for anonymous users). If the fantastic <a href="https://drupal.org/project/authcache?ref=adammalone.net">authcache</a> module is installed, there are implications of cached pages for authenticated users too!</p><p>If <a href="https://pecl.php.net/package/APC?ref=adammalone.net">APC</a> is enabled, there is the further risk that changes will not take effect until the next time apache/PHP is restarted. <a href="https://php.net/manual/en/apc.configuration.php?ref=adammalone.net#ini.apc.stat">The documentation</a> is accurate when advising the user to take care when changing the apc.stat setting. With apc.stat = 0, the compiled files themselves are stored in the APC cache and the originals on disk are not re-checked. 
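For reference, the APC settings in question live in php.ini or a bundled apc.ini and look something like this (the values here are illustrative, not a recommendation):

```ini
apc.enabled = 1
; Skip the per-request stat() of source files: changes on disk are
; invisible until the cache is cleared or apache/PHP is restarted
apc.stat = 0
apc.shm_size = 64M
```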
So even if you make changes to files, the changes simply will not show until either the APC cache is cleared or apache/PHP is restarted.</p><p>Keeping files in the APC cache can lead to issues and questions as to why changes are not taking effect, but in my experience the performance gain on production sites that do not require changes makes the setting worthwhile.</p><p><strong><strong>The right way</strong></strong></p><ul><li><em><em>Develop, then optimise the server.</em></em> Don't burden yourself with having to clear caches and restart things with every little change. Not only would it massively inflate the length of time it takes to develop anything; it'd be damned annoying.</li><li><em><em>Always have site optimisation in mind</em></em>. Just because you're developing and creating a site doesn't mean you can fill pages with enormous views chock full of <a href="http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html?ref=adammalone.net">JOIN</a> queries. If you do that, you're going to have a bad time regardless of how you try to optimise the server.</li><li><em><em>Optimise for the site in question.</em></em> Plenty of <a href="http://www.chadcf.com/blog/drupal-performance-and-scaling-part-1-anonymous-users?ref=adammalone.net">other</a> <a href="https://drupal.org/node/326504?ref=adammalone.net">people</a> have <a href="http://postmodern.nzpost.co.nz/2012/05/17/making-our-site-faster-with-varnish/?ref=adammalone.net">written</a> about how to optimise Drupal. Use the resources, but target them to your site; a small blog with limited readers is unlikely to gain anything from CDNs, Varnish and memcache.</li></ul> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Omega subtheme nuances ]]></title>
        <description><![CDATA[ Omega comes with two default theme grids:

 * 960px - commonly used on desktop sites [http://960.gs/].
 * fluid - more often seen on mobile sites for full width display
   [https://en.m.wikipedia.org/?useformat=mobile].

I recently participated in the creation of a site using Omega as a basetheme ]]></description>
        <link>https://www.adammalone.net/omega-subtheme-nuances/</link>
        <guid isPermaLink="false">5f33903621b8f9692ae93654</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 06 Jan 2013 20:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/omega-html5.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Omega comes with two default theme grids:</p><ul><li><strong><strong>960px</strong></strong> - commonly used on <a href="http://960.gs/?ref=adammalone.net">desktop sites</a>.</li><li><strong><strong>fluid</strong></strong> - more often seen on mobile sites for <a href="https://en.m.wikipedia.org/?useformat=mobile&ref=adammalone.net">full width display</a>.</li></ul><p>I recently participated in the creation of a site using Omega as a basetheme with the following theme architecture:</p><p>Since it was desirable to have a <strong><strong>960px grid</strong></strong> for the desktop site and a <strong><strong>fluid grid</strong></strong> for the mobile site, while both desktop and mobile subthemes also had to inherit basic CSS from the Default Site Theme (background colour, text colour, margins/paddings etc.), the default theme must have CSS files for the subthemes to inherit from. A fluid theme <strong><strong>will not </strong></strong>inherit anything from the custom basetheme if there are no fluid CSS files in the custom basetheme. The same is true of 960px subthemes.</p><p>For Omega subthemes the answer is simple, yet sparsely documented. 
At the very minimum include within your overall <em><em>Default Site Theme</em></em> the following files:</p><ul><li>custom-themename-alpha-default-normal.css</li><li>custom-themename-alpha-fluid-normal.css</li></ul><p>The default (960px grid CSS file) can also have -wide and -narrow derivatives, responsive to <a href="https://www.w3.org/TR/css3-mediaqueries/?ref=adammalone.net">@media queries</a>.</p><p>It should also be noted that using the global.css files in subthemes can cause non-inheritance of parent global.css unless the following is adhered to:</p><ul><li>Subtheme global.css are renamed to custom-themename-global.css</li><li>Any declarations of css[global.css] in the subtheme .info file are renamed to css[custom-themename-global.css]</li><li>Within the settings for the subtheme, under 'Toggle Styles', the css file is ticked to be included.</li></ul><p>As usual, comments and questions below!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ New Year 2013 ]]></title>
        <description><![CDATA[ Another year passes, another mayan apocalypse
[https://edition.cnn.com/2012/12/20/world/doomsday-coming/index.html] averted,
another year to add to my tally of those lived through.

As usual many of us are busy making predictions and new years resolutions that
inevitably we shall not keep to. I& ]]></description>
        <link>https://www.adammalone.net/new-year-2013/</link>
        <guid isPermaLink="false">5f338fe421b8f9692ae9363a</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Thu, 03 Jan 2013 20:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/fireworks.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Another year passes, another <a href="https://edition.cnn.com/2012/12/20/world/doomsday-coming/index.html?ref=adammalone.net">Mayan apocalypse</a> averted, another year to add to my tally of those lived through.</p><p>As usual, many of us are busy making predictions and New Year's resolutions that we shall inevitably not keep. I've made a few of my own and will document them here as a record to look back upon at the end of the year.</p><p>The first such resolution is tied to the belief that I will still have this blog and will continue writing. It's both a good outlet <strong><strong>and</strong></strong> a fine repository for my ideas; lest I forget what I work on.</p><p>With this blog and server comes more Drupal, another thing I'm keen to remain on top of in 2013. I'll be attending <a href="https://sydney2013.drupal.org/?ref=adammalone.net">DrupalCon</a> in Sydney soon, which will be a fantastic boost to my knowledge and understanding. Perhaps it will act the same way that <a href="http://2012.drupaldownunder.org/?ref=adammalone.net">Drupal DownUnder</a> did last year: as a catalyst that promotes passion for the subject and keeps me hungry to learn more!</p><p>There are modules <a href="https://drupal.org/user/1295980/track/code?ref=adammalone.net">I'm still working on</a>, ideas I'd love to make into contrib and of course <a href="https://drupal.org/community-initiatives/drupal-core?ref=adammalone.net">Drupal 8</a>. Plenty of things to keep me occupied until at least 2014.</p><p>I also have two domain names that remain underutilised; it may be a little unlikely due to other pressures, but <a href="http://typhonius.com/?ref=adammalone.net">typhonius.com</a> and <a href="http://glo5.com/?ref=adammalone.net">glo5.com</a> deserve a little love. Ideas for how I can use them are always appreciated!</p><p>Finally, and almost as an aside, I feel it appropriate to proclaim my intention to (re)start learning another language. 
For one semester at University I started studying Chinese; a less than trivial language. It would be remiss of me to not admit a small reason for continuing is the ability to respond accordingly when I'm being talked about in hushed mandarin. A larger reason, however, is my enjoyment of languages and a desire to lose my monolingual trait.</p><p>I'd be interested to hear if other people have resolutions they wish to keep. I offer the comment section of this blog post as a record to prove your confidence, especially if it's the same as one of mine!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Diversifying ]]></title>
        <description><![CDATA[ Some backstory

Whilst being involved with Drupal, I&#39;ve been employed in a number of roles, each
with different responsibilities and tasks.

 * A web shop developing numerous sites for clients with very particular
   requirements and to a strict deadline.
 * Under the employ of NFP sites for friends, family, local ]]></description>
        <link>https://www.adammalone.net/diversifying/</link>
        <guid isPermaLink="false">5f338f9f21b8f9692ae93627</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Wed, 02 Jan 2013 07:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/cups.png" medium="image"/>
        <content:encoded><![CDATA[ <p><strong>Some backstory</strong></p><p>Whilst being involved with Drupal, I've been employed in a number of roles, each with different responsibilities and tasks.</p><ul><li>A web shop developing numerous sites for clients with very particular requirements and to a strict deadline.</li><li>Under the employ of NFP sites for friends, family, local clubs and organisations I'm affiliated with.</li><li>An organisation that runs many large websites, each with many thousands of users and nodes. The ongoing maintenance, management, upgrades, enhancements and general oversight of these is a mammoth task in itself.</li></ul><p>I've found that the more I learn about specific topics, the more specialised I become, and with that comes the risk of being obsoleted at some point. By diversifying my knowledge it's my aim, and indeed personal duty, to remain up to date and relevant to whatever the current trend is.</p><p><strong>With this in mind</strong></p><p>It is with this in mind, then, that I have decided to allow the skills I have developed to be used for freelance purposes outside of work hours. Drupal is something you really have to immerse yourself in to even start to understand. By being involved with all kinds of sites from many different clients you start to appreciate its scope and gain great knowledge of contrib modules; a skill much underrated!</p><p>By the end of January I'll have contributed to around 4 independent sites. The owners/developers/managers of said sites have reached out and engaged me, occupying my already rather full schedule. I'd rather not sit around doing nothing over the xmas break and <a href="https://drupal.org/project/issues/drupal?ref=adammalone.net">Drupal 8 can only keep me occupied for so long</a>!</p><p>Wish me luck with my endeavours and advice is always welcome in the comments!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Happy Holidays 2012 ]]></title>
        <description><![CDATA[ Whether you&#39;re a follower of religion, or a subscriber to deities, it must be
agreed that it&#39;s great how so many people get together and celebrate at roughly
this time of year. Any excuse to give, receive, take some good food, catch up
with friends and ]]></description>
        <link>https://www.adammalone.net/happy-holidays-2012/</link>
        <guid isPermaLink="false">5f338f4921b8f9692ae93613</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Tue, 25 Dec 2012 12:37:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/xmas.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Whether you're a follower of religion, or a subscriber to deities, it must be agreed that it's great how so many people get together and celebrate at roughly this time of year. Any excuse to give, receive, take some good food, catch up with friends and relatives and generally be happy is, quite frankly, a good excuse.</p><p>Happy holidays everyone!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Time &#x3D; Money ]]></title>
        <description><![CDATA[ The reason I&#39;ve been uncharacteristically quiet this month (nigh on two weeks),
is due to moving residence. It&#39;s a minor jump to the title of this blog post but
trust me it&#39;s all linked!

The lease of my previous place of residence ended on ]]></description>
        <link>https://www.adammalone.net/time-money/</link>
        <guid isPermaLink="false">5f338f0521b8f9692ae935fc</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sat, 15 Dec 2012 02:16:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/time_money.png" medium="image"/>
        <content:encoded><![CDATA[ <p>The reason I've been uncharacteristically quiet this month (nigh on two weeks) is due to moving residence. It's a minor jump to the title of this blog post but trust me, it's all linked!</p><p>The lease of my previous place of residence ended on the 10th of December and as such the ten days prior were devoted to removing objects, cleaning and sanctifying the earth in and around the house. Suffice it to say it wasn't a very pleasant task but indeed <em>somebody had to do it</em>.</p><p>Whilst scrubbing individual kitchen floor tiles and polishing both nooks <strong>and</strong> crannies I was afforded an amount of time to think. One thought kept reentering my consciousness and surfacing: is this the best use of my time?</p><p><strong>How much is your time worth?</strong></p><p>Whilst I was in <a href="https://en.wikipedia.org/wiki/Sixth_form?ref=adammalone.net">VI form</a>, fellow students and I were discussing wages earned at our Saturday jobs. Our Biology teacher, a universally adored and respected man named John Wrighton, rather brazenly (we thought) exclaimed how he would not get out of bed for what we were being paid, ~£5.50 an hour.</p><p>Now, however, neither would I.</p><p>In a <a href="/post/diversifying">post I've written</a> concurrently, I talk about diversifying my abilities and skills. This is not, for the most part, a simple process. Learning new things takes time; time I do not have if I'm cleaning cobwebs from guttering. However, that time and the skills learned can lead to jobs requiring a broader, more varied or deeper skillset. These positions invariably come with an attractive pay rate befitting the smaller pool of applicants with the requisite abilities. Hence there is a slightly indirect, though possible, reward for putting in the hours to learn.</p><p>I spent around a week in total cleaning, with some crazy late nights, all in the hope of a full bond payout. In hindsight, I think my time would have been better spent doing almost <strong>anything</strong> else, and the cost of hiring a professional cleaning outfit to whip through the property would have paled into insignificance in the grand scheme of things.</p><p>I feel almost arrogant in saying that my time is more important than the cleaning of a house prior to ending a lease but at the same time, I'm an ineffective non-professional cleaner. Is my time not better invested becoming more effective and more professional in my current career?</p><p><strong>My caveats</strong></p><p>It should be noted that while it often seems like the easy way out to pass tasks on to other people in return for a little cash, one should take the time to assess whether their time is worth the money transferred. My time is not worth so much that I can dine out every night - notwithstanding the fact I enjoy cooking. My time is also not worth having a maid service my apartment every week. However, the once-in-a-fairly-long-time movement of my entire life... I think I can ask for help with that!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Core contributor without even knowing ]]></title>
        <description><![CDATA[ A short while ago I had a conversation with webchick [http://webchick.net/] in 
#drupal-contribute [irc://irc.freenode.net/drupal-contribute] about a Drupal 8
issue [https://drupal.org/node/203955] I was interested in. The topic of getting
a patch committed came up and I expressed my desire to, one ]]></description>
        <link>https://www.adammalone.net/core-contributor-without-even-knowing/</link>
        <guid isPermaLink="false">5f338eb521b8f9692ae935de</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 02 Dec 2012 07:12:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/typho-drupal.png" medium="image"/>
        <content:encoded><![CDATA[ <p>A short while ago I had a conversation with <a href="http://webchick.net/?ref=adammalone.net">webchick</a> in <a href="irc://irc.freenode.net/drupal-contribute">#drupal-contribute</a> about a <a href="https://drupal.org/node/203955?ref=adammalone.net">Drupal 8 issue</a> I was interested in. The topic of getting a patch committed came up and I expressed my desire to, one day, earn a <a href="https://drupal.org/node/21778?ref=adammalone.net">commit credit</a>. Although webchick believed that I already had credit to my name, I assured her that was not the case.</p><p><strong>A revelation</strong></p><p>Even more recently, whilst in the git log searching for when a particular issue was committed, I decided to search for my username, just in case I had missed something.</p><p><strong>Oh!</strong></p><p>A quick search of that wondrous commit key revealed the commit in the <a href="http://drupalcode.org/project/drupal.git/commit/e06c461?ref=adammalone.net">Drupal git repository</a> where I am now immortalised as <em>helping out a little bit</em>. <a href="https://drupal.org/node/304540?ref=adammalone.net">The issue</a> addresses the problems arising after enabling themes that require functions from nonexistent base themes. Downloading <a href="https://drupal.org/project/rubik?ref=adammalone.net">Rubik</a> and setting it as the theme without having a copy of <a href="https://drupal.org/project/tao?ref=adammalone.net">Tao</a> is a demonstrable example of what the issue aimed to address.</p><p>The unfortunate and slightly bittersweet thing is that the patch committed as a resolution to the issue didn't quite resolve it entirely, as I explained in <a href="https://drupal.org/node/304540?ref=adammalone.net#comment-6385506">this comment</a>. Hopefully it'll get a bit more attention soon and I may even get another commit credit to my name.</p><p>Until then, I'm happy to be able to count myself amongst the legion of Drupal core contributors.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Strings to my bow ]]></title>
        <description><![CDATA[ Last week I was lucky enough to gain maintainership of another Drupal module; 
Block Title Link [https://drupal.org/project/block_titlelink].

As long as there is time in the day and I haven&#39;t fallen asleep at my desk I
find it hard not to learn something, fix ]]></description>
        <link>https://www.adammalone.net/strings-my-bow/</link>
        <guid isPermaLink="false">5f338e5021b8f9692ae935c5</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Thu, 22 Nov 2012 21:30:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/blocktitle.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Last week I was lucky enough to gain maintainership of another Drupal module; <a href="https://drupal.org/project/block_titlelink?ref=adammalone.net">Block Title Link</a>.</p><p>As long as there is time in the day and I haven't fallen asleep at my desk I find it hard not to learn something, fix things, break modules and experiment with other people's inventions. Whether it's Drupal or something else, I like to take things apart (metaphorically when talking about code) and see how it all fits. Even if I don't understand it, for the most part the general gist stays with me.</p><p>So after seeing that the module required someone new to take the reins, <a href="https://drupal.org/node/1839004?ref=adammalone.net">I stepped forward</a> and was offered the opportunity to show what I could do!</p><p>So if you're new to Drupal, have written some custom modules of your own and want to start contributing back to the community, there are a couple of good pathways:</p><ul><li>Think of a unique concept, write a module for it and <a href="https://drupal.org/node/1011698?ref=adammalone.net">get yourself approved</a> as a maintainer.</li><li>Write a few patches for an active module, gradually ingratiate yourself with the maintainer and offer to co-maintain.</li><li>Find a dead module and request ownership of it in the <a href="https://drupal.org/project/issues/webmasters?component=Project+ownership&ref=adammalone.net">Drupal webmasters issue queue</a>.</li></ul><p>I'm a <strong>huge</strong> advocate of giving back to the community and <strong><a href="https://en.wikipedia.org/wiki/Pay_it_forward?ref=adammalone.net">paying it forward</a></strong>. If nobody shares then we limit the next generation of people who share, which in turn limits the evolution of the project.</p><p><strong>The future of the Block Title Link module</strong></p><p>I've blasted through a few of the Drupal 7 RTBC and 'low hanging fruit' issues, which are now fixed in the latest dev release. These will shortly be released in the 7.x-1.4 release of the module.</p><p>I've mentally planned how to solve the remaining issues in the queue and will do so for Drupal 7 first, before the new year. Following this I'll branch the Drupal 6 version of the module and create a backport from the Drupal 7 version so the features are as similar as they can be, with the only differences likely to be API based.</p><p>It's a module I've used, it's useful, people rely on it, so why don't I finish adding all the random features that users want!</p><p><em>Watch as my bold claims to finish this soon turn around and bite me!</em></p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Advanced networking ]]></title>
        <description><![CDATA[ I overheard people talking about business cards today which made me start to
think back to my experiences at Drupal Downunder 2012 and the art of advanced
networking.

In the space of almost three days I accrued a wallet full of business cards. I
can only imagine some were interested ]]></description>
        <link>https://www.adammalone.net/advanced-networking/</link>
        <guid isPermaLink="false">5f338de121b8f9692ae935b0</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Mon, 19 Nov 2012 01:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/cards.png" medium="image"/>
        <content:encoded><![CDATA[ <p>I overheard people talking about business cards today, which made me think back to my experiences at Drupal Downunder 2012 and the art of advanced networking.</p><p>In the space of almost three days I accrued a wallet full of business cards. I can only imagine some were interested in me as a client, others as an employee, perhaps some wanted to expand their twitter following and I'm sure there must have been one who just wanted me to have their number.</p><p>As if to personify the well known saying:</p><blockquote>It's not what you know, it's who you know.</blockquote><p>The business card is the old fashioned way to sequester another soul into your network. To create a connection which could have the opportunity of developing into a symbiotic relationship to rival that of rhinos and oxpeckers.</p><p>Does the business card still matter though? Can we not do away with these wallet fillers and instead connect on <a href="https://www.linkedin.com/profile/view?id=180805162&ref=adammalone.net">linkedin</a>, <a href="https://twitter.com/adammalone?ref=adammalone.net">twitter</a>, <a href="https://plus.google.com/103687647706645691195/posts?ref=adammalone.net">google plus</a> or simply by email? In so many ways, perhaps even the most important when it comes to meeting people, we can. I would venture to say the only thing holding the business card's existence in its hands is the <em>face to face</em>.</p><p>Hiding behind a monitor with faceless emails flying between parties can make it hard to gauge a person; to really get a measure of them. Tiny afflictions visible only when within the two metre comfort zone; true passion in the words being spruiked, or merely a façade displayed in an effort to deceive; detection of 'chemistry' between parties. All of those and more are a result of the <em>face to face</em>.</p><p>Unfortunately, being in front of others does not come easily to some people, so this rather intimate method is their bane. I'm a fan of it and enjoy the spotlight, however, so perhaps I'll allow the business card for a little longer; its days are numbered though.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Replicating the Blogger blog archive in Drupal ]]></title>
        <description><![CDATA[ One thing I&#39;ve noticed a lot of people like about Blogger
[https://www.blogger.com/] is the block displaying an archive of posts that
expands showing posts in each month.

Of course the easiest method is to let views do this for you; one of the
preconfigured views ]]></description>
        <link>https://www.adammalone.net/replicating-blogger-blog-archive-drupal/</link>
        <guid isPermaLink="false">5f338d1c21b8f9692ae9358f</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Wed, 14 Nov 2012 09:30:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/expandy_archive.png" medium="image"/>
        <content:encoded><![CDATA[ <p>One thing I've noticed a lot of people like about <a href="https://www.blogger.com/?ref=adammalone.net">Blogger</a> is the block displaying an archive of posts that expands to show the posts in each month.</p><p>Of course the easiest method is to let views do this for you; one of the preconfigured views is indeed a monthly archive block, perfect for displaying all the posts for a particular month-year combination. Views also provides a page version of this functionality, which I have taken the liberty of enabling here.</p><p>This is all right, but what about having some kind of jQuery interaction? Instead of a full page load to drill down into further detail, what if all the posts were accessible in a dynamic list that expanded when the user clicked on the month?</p><p><strong>Implementation</strong></p><p>The functionality I've described above is represented exactly, at the time of writing, in the <em>Blog Archive</em> block in the right sidebar. Rather than repeat the methods I've learnt (and modified/expanded upon), described in similar form <a href="https://drupal.org/node/825052?ref=adammalone.net">here</a> or in this <a href="http://www.only10types.com/2010/12/drupal-collapsible-blog-archive-like.html?ref=adammalone.net">blog post</a> (ironically about Drupal and written on Blogger), I will instead simply provide readers with two files that should enable them to quickly implement the same block on their site without following a number of finicky step by step instructions.</p><p>I will however provide steps:</p><ol><li>Import <a href="/sites/adammalone/files/blogger_archive_view_export.txt">this views export</a> into views.</li><li>Place this <a href="/sites/adammalone/files/archive-block.js.txt">javascript file</a> in your theme folder. NB - change the file extension to .js before doing this. The file has a .txt extension for security reasons.</li><li>Ensure the javascript file is used by the theme by including the following in your theme info file.</li></ol><pre><code class="language-yaml">scripts[] = path/to/archive-block.js</code></pre><p>One thing I wanted especially was for non-node pages to expand the most recent month and for all node pages to expand the month the specific node was posted. Of course, if you have some js/jQuery knowledge the sky is the limit!</p><p><strong>Update</strong></p><p>In your views configuration, for the 'Content: Post Date' field in the rewrite results section, ensure the box that says 'Rewrite the output of this field' is ticked and that within the box it reads:</p><pre><code class="language-css">&lt;span class="collapse-icon"&gt;►&lt;/span&gt;</code></pre><p>If you can't do that then comment here with tales of your woes; if you can, then let me know if you like it!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ By invitation only ]]></title>
        <description><![CDATA[ Whether it&#39;s for fun, tinkering around or for an actual purpose I like to keep
up to date with contrib modules [https://drupal.org/project/modules]. It gives
me the experience to say:

&gt; There&#39;s a module for that!
And actually know the module and functionality. ]]></description>
        <link>https://www.adammalone.net/invitation-only/</link>
        <guid isPermaLink="false">5f338ccd21b8f9692ae93575</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 11 Nov 2012 21:30:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/invite.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Whether it's for fun, tinkering around or for an actual purpose, I like to keep up to date with <a href="https://drupal.org/project/modules?ref=adammalone.net">contrib modules</a>. It gives me the experience to say:</p><blockquote>There's a module for that!</blockquote><p>And actually know the module and functionality. It's even cooler when you can say <a href="https://www.reddit.com/r/drupal/comments/12fckc/before_i_host_a_site_what_should_i_make_sure_i_do/c6ul0ut?ref=adammalone.net">you're a co-maintainer of the module</a> you're recommending; and say it with pride!</p><p>My latest experimentation was with the <a href="https://drupal.org/project/invite?ref=adammalone.net"><em>Invite module</em></a>. The purpose of the module is to allow a site to grow organically by a system of invitations. Existing users of the site are allowed to invite their friends, thus creating a sort of viral growth similar to the way <a href="https://gmail.com/?ref=adammalone.net">gmail</a> and <a href="https://www.facebook.com/">facebook</a> used to work.</p><p>After installing and having a look around, I came across what I thought was a bit of an obvious limitation of the module; invitations are only granted to <em>roles</em> rather than <em>individual users</em>. This is fine if you want to allow an administrative role to have unlimited invites, or if the desire is to just give all users x invites, full stop.</p><p>However, it seemed to me to be more a limitation than a useful feature. In good Drupal style I decided to <a href="https://drupal.org/node/129339?ref=adammalone.net">write a patch</a>. Bearing in mind the issue I attached it to is over five years old, it seems either nobody else wanted the feature or nobody put aside the time to write it.</p><p>It didn't take too long to write and seems to do the trick. If the 'per-user' mode is selected in the invite configuration, users may have invites granted and rescinded on an individual, per-user basis.</p><p>It may not be for everyone and it doesn't look like the patch is getting committed any time soon, but sometimes I just can't help but make things better! If you're keen on seeing this functionality built into the invite module, <a href="https://drupal.org/node/129339?ref=adammalone.net#comment-form">post your thoughts on that issue</a>. If you have similar Drupal stories of module improvement or want to comment on this post, do so here.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ An education for free ]]></title>
        <description><![CDATA[ Obtaining an education is a highly sought after quest that a large number of parents desire for their offspring. However with life, short as it is, intertwined and forever progressing with the meandering path of time, an education can easily be missed.

Learning in the traditional sense of primary, secondary ]]></description>
        <link>https://www.adammalone.net/education-free/</link>
        <guid isPermaLink="false">5f338c7121b8f9692ae93552</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Thu, 08 Nov 2012 21:30:00 +0000</pubDate>
        <media:content url="https://www.adammalone.net/content/images/2020/08/edx.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Obtaining an education is a highly sought after quest that a large number of parents desire for their offspring. However, with life, short as it is, intertwined and forever progressing with the meandering path of time, an education can easily be missed.</p><p>Learning in the traditional sense of primary, secondary and tertiary education, some certificates and a degree by the time you're 22, can be unhelpful to a surprising number of students.</p><p>Whether they're pressured into the wrong field by external factors (read: parents), study something they end up having nothing to do with after graduating (<a href="https://www.adammalone.net/about">see me</a>), or decide later on in life that they wish they had stayed in school rather than leaving early, there always seem to be people whose education did not sufficiently prepare them for the path in life they ended up travelling.</p><p><strong>Is there no recourse?</strong></p><p>Actually there is. Some people decide to become mature students; indeed there were a couple of students partaking in my degree who had decided to do such a thing.</p><p>I, however, have stumbled upon something almost entirely different, with one of the most appealing offers in the world; <em>it's all for free.</em></p><p>Enter <a href="https://www.edx.org/?ref=adammalone.net">edX</a>, a collaborative effort between some of the most well known and respected universities in North America to offer free education in a number of subjects to prospective students. Of the nine courses presently on offer, I have opted to take two: to fill in some of the gaps in the self taught nature of my prior education and to offer me knowledge I may use in future.</p><ul><li>CS50x offered by Harvard</li><li>6.00x offered by MIT</li></ul><p>It is my hope that I'll learn python to a reasonable and practical level, as well as boost my web development skills to higher levels. Since the courses are free there is no financial incentive to study, so I must motivate myself. However, the passion I have for <em>all things tech</em> seems to be enough to get me through them, in conjunction with work and other responsibilities, for the time being.</p><p>Give their free education a go and learn something new! You never know where it might take you.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Sleep liability ]]></title>
        <description><![CDATA[ I was having a discussion recently about ridiculous things people do in their
sleep. We covered people who walk in their sleep, talk in their sleep, and even
those who behave as one would awake, but under the influence of a deep slumber.

The topic then came up about those ]]></description>
        <link>https://www.adammalone.net/sleep-liability/</link>
        <guid isPermaLink="false">5f338c2121b8f9692ae93539</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Tue, 06 Nov 2012 21:30:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/koala_sleeping.png" medium="image"/>
        <content:encoded><![CDATA[ <p>I was having a discussion recently about ridiculous things people do in their sleep. We covered people who walk in their sleep, talk in their sleep, and even those who behave as one would awake, but under the influence of a deep slumber.</p><p>The topic then came up of those who have 'sleep-struck' or in any way assaulted a partner lying next to them. This is where we reached a juncture.</p><p>To remain completely neutral from a storyteller and question asker perspective, I won't divulge which of the two following opinions was my own.</p><p><strong>The set up</strong></p><p>The scenario is thus:</p><ul><li>Two related people lie in a bed.</li><li>One of the pair physically assaults the other whilst both are sleeping.</li></ul><p>For this question we must now <a href="http://philosophy.hku.hk/think/arg/hidden.php?ref=adammalone.net">lay out our assumptions</a>:</p><ul><li>We must assume both participants are 100% assuredly sleeping.</li><li>We must also assume both participants are of sound and clean mind (no alcohol/drug influence and no prior undiscovered psychological issues).</li><li>Finally, we must assume that this was an act not caused by pre-existing <a href="http://www.medterms.com/script/main/art.asp?articlekey=13290&ref=adammalone.net">fasciculation</a>.</li></ul><p><strong>The question</strong></p><p>Because the couple were sleeping, does this excuse any assault that has taken place? Is this of comparable incidence to '<a href="https://en.wikipedia.org/wiki/Insanity_defense?ref=adammalone.net">pleading insanity</a>' or is the assault of one human by another itself grounds for legal repercussions?</p><p>One could argue that the assaulter had prior intent and, although sleeping, was not averse to the occurrence. If anything, following the event they were glad, since being asleep is as good an excuse as any.</p><p>Despite sleep being a state of semi-consciousness, are we as humans in control of our bodies? Should it be left to the sleeper to account for their actions, or is it someone else's responsibility to do so?</p><p>I suppose the true question would be <strong>'Who is liable for the assault?'</strong></p><p><strong>Stop value judging</strong></p><p>Does the scenario change in your mind if we alter some of the value judgments you, the reader, have already constructed about the two imaginary people in this scenario?</p><p>Remember your answer to the previous question and now answer honestly: does your answer change if:</p><ul><li>The victim was a man and the assaulter was a woman?</li><li>The victim died as a result of their injuries?</li><li>Either victim or assaulter was a minor?</li></ul><p>Quite often in law the punishment will attempt to reflect the severity of the crime. Compare <a href="https://legal-dictionary.thefreedictionary.com/battery?ref=adammalone.net">battery</a>, <a href="https://en.wikipedia.org/wiki/Grievous_bodily_harm?ref=adammalone.net">GBH</a> and <a href="https://legal-dictionary.thefreedictionary.com/first+degree+murder?ref=adammalone.net">'1st degree' murder</a>, which all carry increasingly severe adjudication dependent, for the most part, on the crime being seen as more severe. However, if the person committing the assault was not even aware they were undertaking such a task, does the severity of the resulting injury come into play at all?</p><p>Although not something you would think about every day, I challenge you to ask yourself honestly: firstly, if you were a judge, how would you rule? Now, in turn, place yourself in the position of the victim and the assaulter. Ask yourself how you would like the judge to rule.</p><p>I know where I stand; tell me where you stand.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Death by taxes ]]></title>
        <description><![CDATA[ It&#39;s been a long couple of days but I can finally say: My taxes are done!

Since I moved to Australia in the last financial year, this has been the first
time I&#39;ve had to complete a tax return properly. Prior to this I was a ]]></description>
        <link>https://www.adammalone.net/death-taxes/</link>
        <guid isPermaLink="false">5f338b8921b8f9692ae9351f</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Wed, 31 Oct 2012 12:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/tax.png" medium="image"/>
        <content:encoded><![CDATA[ <p>It's been a long couple of days but I can finally say: <strong><strong>My taxes are done!</strong></strong></p><p>Since I moved to Australia in the last financial year, this has been the first time I've had to complete a tax return properly. Prior to this I was a student or working small casual jobs in England, where the tax system is slightly different. I can't say it was a very enjoyable experience but as <a href="https://en.wikipedia.org/wiki/Death_%26_Taxes?ref=adammalone.net">the saying goes</a>:</p><blockquote>...but in this world nothing can be said to be certain, except death and taxes.</blockquote><p>Unfortunately I have neither the assets nor the deviance to enable me to avoid either of the above, so I'm restricted to doing my duty and paying up. If only coffee were a deductible and non-renewable asset that I could claim deductions for, complete with a different formula, dependent on the strength, brand, cost and style of coffee drunk, to calculate the applicable offset. That is, however, perhaps a task for another blog post.</p><p>One thing that I did struggle a little to comprehend whilst filling in forms was exactly how my semi-ethereal assets are defined. Working in the industry I do and taking that work out of the office during the evenings and weekends means I'm dealing with data and <em><em>the cloud </em></em>on a deductible basis.</p><p><strong><strong>How do you define data?</strong></strong></p><p>The hardest part of the tax return, however, was putting a value and an asset class on some of the less physical items I purchased over the past year, <a href="https://www.ato.gov.au/content/77955.htm?ref=adammalone.net">specific to my industry</a>. It started to really make me think about things when I put domain names down. Sure, they're a form of asset but when you really think about it, you're just paying someone to use letters in a specific order.
I must be an enormous mug as I'm paying for more than one set of letters!</p><p>Data, too, is hard to quantify. It's not really an asset that you own, more a consumable item for which a fee is paid every month; all very confusing. After a couple of quick Skype calls to accountant friends and several hours <a href="https://www.ato.gov.au/content/downloads/ind00313557n19960612.pdf?ref=adammalone.net">trawling through various pages</a> on the ATO website, I feel like I made an accurate, if conservative, judgment.</p><p><strong><strong>What will happen with the rebate?</strong></strong></p><p>Since I filed with <a href="https://www.ato.gov.au/Errors/404.aspx?aspxerrorpath=%2Fpathway.aspx%3Fsid%3D42&ms=individuals&ref=adammalone.net">e-tax</a> I've been advised to <a href="https://www.ato.gov.au/content/1440.htm?ref=adammalone.net">expect a 12 day wait</a> (not too bad by bureaucratic standards) before I can start splurging. Unfortunately, since I am a mature adult with responsibilities and bills to pay, I'll be splurging on sensible, boring things. If you've got anything cool/unusual lined up for your rebate, let me know in the comments!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Staggering cron to distribute maintenance tasks ]]></title>
        <description><![CDATA[ Whilst performance tuning a Drupal site recently, I read an article detailing
reasons why cron is bad; How Drupal&#39;s cron is killing you in your sleep
[http://www.metaltoad.com/blog/how-drupals-cron-killing-you-your-sleep-simple-cache-warmer]
. Running cron hourly was purportedly killing our site&#39;s cache which would have
the effect ]]></description>
        <link>https://www.adammalone.net/staggering-cron-distribute-maintenance-tasks/</link>
        <guid isPermaLink="false">5f338b1b21b8f9692ae934ff</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Tue, 30 Oct 2012 00:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/newrelic_before.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Whilst performance tuning a Drupal site recently, I read an article detailing reasons why cron is bad; <a href="http://www.metaltoad.com/blog/how-drupals-cron-killing-you-your-sleep-simple-cache-warmer?ref=adammalone.net">How Drupal's cron is killing you in your sleep</a>. Running cron hourly was purportedly killing our site's cache, which would have the effect of slowing down browsing for users.</p><p><strong><strong>This is bad.</strong></strong></p><p>The page touts intelligent cron modules such as <a href="https://drupal.org/project/elysia_cron?ref=adammalone.net">elysia cron</a> or <a href="https://drupal.org/project/ultimate_cron?ref=adammalone.net">ultimate cron</a> as a solution to ensure that the tasks that need to run very often do so without simultaneously running the tasks that require more grunt, wipe the cache, cause high loads and <em><em>kill your site.</em></em></p><p>After whiteboarding all of the implementations of <a href="https://api.drupal.org/api/drupal/modules!system!system.api.php/function/hook_cron/7?ref=adammalone.net">hook_cron</a> (these functions run when cron does), I ranked them in order of importance and set about writing the cron script to place into the elysia cron settings. I also <a href="https://drupal.org/node/23714?ref=adammalone.net">wrote an external script</a> to call <a href="http://drupalcode.org/project/elysia_cron.git/blob/refs/heads/7.x-2.x:/cron.php?ref=adammalone.net">elysia's cron.php</a> every 5 minutes instead of using the automatic cron provided with Drupal 7, as the minimum frequency of calling it was once per hour. To ensure we had an active site responsive to reader feedback, I decided to run it a little more frequently than default.</p><p><strong><strong>Doesn't running more frequently have negative effects?</strong></strong></p><p>Elysia cron runs intelligently. For each implementation of hook_cron a threshold may be set.
If cron fires every minute and the threshold for <a href="https://api.drupal.org/api/drupal/modules!system!system.module/function/system_cron/7?ref=adammalone.net">system_cron</a> is once a week then nothing will happen and no sites die!</p><p>The more important processes to run often are those which provide the user with action-based feedback. If a user is directed to a post and sees it rise in the 'Popular Content' section of the site or notices hot topic posts fluctuating with new content posted regularly, they will be more inclined to stay and return to this obviously active site.</p><p>Running cron this frequently on a normal site would have some slow-down implications, but elysia cron allows the administrative user to manage them by taking slow background maintenance tasks and running them less frequently. It's not really too much of a problem to <a href="https://api.drupal.org/api/drupal/modules!dblog!dblog.module/function/dblog_cron/7?ref=adammalone.net">clear old log records</a> from watchdog once a day/week. Checking for <a href="https://api.drupal.org/api/drupal/modules!update!update.module/function/update_cron/7?ref=adammalone.net">module updates</a>, too, can be left to a daily process.</p><p><strong><strong>So everything works now?</strong></strong></p><p>Once I had arranged the modules into groups to execute on the quarter of the hour, every six hours, every day and every week (depending on their importance), I ran into another issue.
Every day/week there'll be a time when all of the cron hooks run, causing a momentary but noticeable drop in performance.</p><p>This was brought to my attention by <a href="https://newrelic.com/?ref=adammalone.net">New Relic</a>, a performance management tool I've used a few times to ensure I'm not writing inefficient code and to track where Drupal is slowing down sites.</p><p>I could see these regular, albeit infrequent, blips in the performance graph accompanied by a reduction in <a href="https://en.wikipedia.org/wiki/Apdex?ref=adammalone.net">apdex</a> and a report showing cron.php to be the biggest hog of memory, time and processing power. Seeing these regular peaks made my mind jump instantly back to college math. Rather than endure these peaks, could we somehow spread the load out to get more frequent, yet smaller, peaks? My idea was that this would result in a higher overall average apdex and a more responsive site.</p><p><strong><strong>The short story is, it worked.</strong></strong></p><p>The longer story, with pictures saying a thousand words, shows lower overall site loads. The peaks are indeed more frequent but the load is on average lower, with average load times under 400ms; good by any website's standard!</p><p>An extract of some of my elysia cron timings is included below and available in an attachment for download. I'd be interested to see if people think my settings are too lean and stringent or if I could cut back even further. I'm aware of how important cron is for a healthy Drupal site, but perhaps some implementations of hook_cron are more equal than others.</p><pre><code class="language-bash">aggregator_cron 10 3,9,15,21 * * *
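# For reference, fields follow standard cron order: minute hour day-of-month month day-of-week.
# e.g. the aggregator line above fires at 03:10, 09:10, 15:10 and 21:10 every day.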
apachesolr_cron 0 */6 * * *
captcha_cron 4 1 * * *
morning_news_cron 00 07-09 * * *
dblog_cron 50 4 1 * *
field_cron 40 0 * * 2
node_cron 4 2 * * *
poll_cron 4 3 * * *
radioactivity_cron */15 * * * *
redirect_cron 40 0 * * *
rules_cron 30 * * * *
scheduler_cron 40 * * * *
system_cron 10 0 * * 6
update_cron 4 0 * * 0</code></pre> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Hallowe&#x27;en Planning ]]></title>
        <description><![CDATA[ It is my aim to host the best Hallowe&#39;en party that my friend group can attend this year. I&#39;m not even wanting the dubious accolade of &#39;best ever&#39; or even &#39;best of 2012&#39;, I know my limits.

I&#39;ve been to ]]></description>
        <link>https://www.adammalone.net/halloween-planning/</link>
        <guid isPermaLink="false">5f338ac521b8f9692ae934e7</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Thu, 25 Oct 2012 21:30:00 +0000</pubDate>
        <media:content url="https://www.adammalone.net/content/images/2020/08/cauldron.png" medium="image"/>
        <content:encoded><![CDATA[ <p>It is my aim to host <strong><strong>​the best Hallowe'en party</strong></strong>​ that my friend group can attend this year. I'm not even wanting the dubious accolade of 'best ever' or even 'best of 2012', I know my limits.</p><p>I've been to a couple of events in the last couple of years. One that I organised, whilst I was Minister for Social Activities at the <a href="http://su-web2.nottingham.ac.uk/~wingchun/?ref=adammalone.net">University of Nottingham Wing Chun club</a> and a few that other people organised in various locations. I even went to one my college organised whilst I was still there; I'm fairly sure I went as 'Doc Brown', a lab coat and white hair are not at all hard to come by when one spent numerous hours in the chem lab.</p><p>Over that time I've learnt a little about what works and what doesn't. It's basically the same as any other social gathering insofar as the key to its success is in the name - <em><em>social.</em></em> Since these situations often bring people from differing social circles together it's likely that people will stay in those cliques for the duration of the party, unless forced otherwise.</p><p>Much in the same way that <a href="http://thesims.com/en_US/home?ref=adammalone.net">The Sims</a> have '<em><em><a href="https://www.wikihow.com/Have-a-Brilliant-Party-in-Sims-3?ref=adammalone.net">Killer Parties</a></em></em>', I'm going to have to buy 20 or so pizzas, light several grills, play some music and schmooze my guests with flattery and constant admiration!</p><p>As yet my costume isn't fully planned and I will take recommendations for what I'll wear as long as it's within safe for work standards. Off the menu are previous costumes of:</p><ul><li>Doc Brown</li><li>A jedi</li><li>Bond, James Bond</li><li>A ghost (yes it was just a sheet)</li><li>An attempt at Jason Voorhees</li></ul><p>An outdoor grill is being provided and food in great abundance is already preplanned. 
The cobwebs will go up along with spiders, candles, and pumpkins. The lights will be dimmed and <a href="http://bootiemashup.com/halloween/?ref=adammalone.net">Hallowe'en music</a> played, my way.</p><p>If I can make this party better I'd sure like to hear ways to host the <em><em>perfect Hallowe'en Party</em></em> in the comments.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Blogging for the over 40s ]]></title>
        <description><![CDATA[ When you are a person of a certain age, or so I&#39;ve heard, the internet is a strange and unknown beast.

For users of my generation and those of prior generations working in the IT sector, everything seems so natural and I feel at home in browser, command ]]></description>
        <link>https://www.adammalone.net/blogging-over-40s/</link>
        <guid isPermaLink="false">5f338a6c21b8f9692ae934cf</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Mon, 22 Oct 2012 21:30:00 +0000</pubDate>
        <media:content url="https://www.adammalone.net/content/images/2020/08/over40view.png" medium="image"/>
        <content:encoded><![CDATA[ <p>When you are a person of a certain age, or so I've heard, the internet is a strange and unknown beast.</p><p>For users of my generation and those of prior generations working in the IT sector, everything seems so natural and I feel at home in browser, command line or app. Whereas for my parental units, a browser is called an internet, a command line is hacking and apps are kitchen appliances.</p><p>Let us not write off those of advanced years however, their wisdom gained from years of life experience deserves to be heard; at least give them the chance to broadcast their musings.</p><p>Following <a href="https://www.adammalone.net/post/drupal-subsites-made-simple">my blog post</a> detailing how to create subsites easily, I've been more inclined to spawn new sites on whims or whenever I <a href="https://idioms.thefreedictionary.com/at+the+drop+of+a+hat?ref=adammalone.net">drop my hat</a>. So when I was approached by parentals coveting my blog and desiring the opportunity to have their own platform I decided to set up a quick site for the over 40s; The Over 40 View.</p><p>The challenge for me in this project was to architect a site simple enough for my parents, their friends, colleagues and anyone not familiar with the internet to use. 
The site also needed to be easy to browse by those interested in what's written and offer a welcoming community aspect which would encourage like-minded individuals to sign up and share their thoughts too.</p><p>I tried to keep the number of modules to a minimum which would help keep the site simple; perfect for the target audience.</p><p>A selection of modules I enabled are as follows:</p><ul><li>Blog - The simplest way for low-maintenance multi-user blog sites;</li><li><a href="https://drupal.org/project/bueditor?ref=adammalone.net">BUEditor</a> - I like the simplicity of the WYSIWYG editor on drupal.org and simplicity is key here;</li><li>Color - Ideal for altering the appearance of the contributed <a href="https://drupal.org/project/sky?ref=adammalone.net">sky theme</a>;</li><li><a href="https://drupal.org/project/fivestar?ref=adammalone.net">Fivestar</a> - Allow site visitors some interaction and feedback with the content written;</li><li><a href="https://drupal.org/project/google_analytics?ref=adammalone.net">Google Analytics</a> - Important to grab some demographics about the site users;</li><li><a href="https://drupal.org/project/pathauto?ref=adammalone.net">Pathauto</a> - Friendly URLs won't scare people off so easily; even the word 'node' can be foreign;</li></ul><p>I have opted not to enable comments on the site as moderation can be a bit of a time sink sometimes and I plan to be as hands-off as possible. Similarly, I've not set up any database logging; I'd hope there'll be limited errors anyway!
Currently the site is administrative invite only but I hope to pass some of that responsibility on to the <em><em>Over 40 View </em></em>community to manage their own authors at some point.</p><p>If you, or someone you know is over (or at least near) 40 and wants the chance to espouse their views, opinions, and just day to day thoughts either comment on this post, or contact the <em><em>Over 40 View </em></em>admin.</p><p>Happy blogging and remember: You're never too old to learn the internet.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Don&#x27;t mess with files ]]></title>
        <description><![CDATA[ One thing that&#39;s been causing me some issues recently is storing managed files
and the effects of changing various properties of them. Since the files have a
number of things recorded in the database; changing these properties can have
unwanted effects.

The file_managed table is where records ]]></description>
        <link>https://www.adammalone.net/dont-mess-files/</link>
        <guid isPermaLink="false">5f3389f621b8f9692ae934b6</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Wed, 17 Oct 2012 23:37:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/exception.png" medium="image"/>
        <content:encoded><![CDATA[ <p>One thing that's been causing me some issues recently is storing managed files and the effects of changing various properties of them. Since the files have a number of things recorded in the database, changing these properties can have unwanted effects.</p><p>The file_managed table is where records are stored and retrieved from, with <a href="https://api.drupal.org/api/drupal/includes!file.inc/function/file_save/7?ref=adammalone.net">file_save</a> and <a href="https://api.drupal.org/api/drupal/includes!file.inc/function/file_load/7?ref=adammalone.net">file_load</a> being the operations to save/update and retrieve data about files from the database respectively.</p><p>When a file is loaded the $file object contains the <a href="https://api.drupal.org/api/drupal/includes!file.inc/group/file/7?ref=adammalone.net">following information</a>:</p><ul><li>fid - The unique identifier of the file (similar to a node ID or user ID);</li><li>uid - The user ID of the user who owns the file (usually the uploader);</li><li>filename - The name of the file without any paths;</li><li>uri - The path to where the file is stored in the filesystem;</li><li>filemime - The <a href="https://en.wikipedia.org/wiki/Internet_media_type?ref=adammalone.net">mimetype</a> of the file;</li><li>filesize - The size of the file in bytes;</li><li>status - In the simplest terms, 1 for permanent, 0 for temporary (removed at cron);</li><li>timestamp - A <a href="http://www.unixtimestamp.com/index.php?ref=adammalone.net">UNIX timestamp</a> signifying when the file was added.</li></ul><p>Whilst writing modules I've had to alter the uri, the filename and the filesize properties of the $file object. In doing so there were a few repercussions I didn't expect, which I've had to resolve.</p><p><strong><strong>Why are you messing with the filesystem?</strong></strong></p><p>Ideally you shouldn't mess with files.
Drupal does an excellent job of managing everything from updates, <a href="https://api.drupal.org/api/drupal/includes!file.inc/function/file_move/7?ref=adammalone.net">moves</a>, <a href="https://api.drupal.org/api/drupal/includes!file.inc/function/file_copy/7?ref=adammalone.net">copies</a> and <a href="https://api.drupal.org/api/drupal/includes!file.inc/function/file_delete/7?ref=adammalone.net">deletes</a> to renaming should you have a duplicate.</p><p>But.</p><p>When using another system that Drupal neither controls nor observes, it is useful to be able to keep any changes made up to date within Drupal. Programmatically creating files or <a href="http://books.google.com.au/books?id=2EeS9Qs5wGgC&pg=PT378&lpg=PT378&dq=twitpic+stream+drupal&source=bl&ots=uYrwKgmASq&sig=Uq3VFVN2k-Ol8GNfvD_7RiExnqc&hl=en&sa=X&ei=IRKCUIb_Gub9iAes1oGgAQ&ved=0CDoQ6AEwAw">requesting and downloading files</a> (<a href="https://www.packtpub.com/drupal-7-module-development/book?ref=adammalone.net">Drupal 7 Module Development</a>) requires such knowledge.</p><p><strong><strong>Why bother telling Drupal?</strong></strong></p><p>Altering the uri of the file will cause two major issues:</p><ul><li>The user will not be able to download the file through Drupal.</li><li>Drupal will error out if the user attempts to save a file of the same name.</li></ul><p>By changing the uri of the file, the path referencing where that file is stored is also changed. When the user clicks on a link to download the file, Drupal will look in the place the uri tells it to and return a 404.</p><p>When Drupal is saving new files to the filesystem it checks to see if there is one there with the same name.
If there is, the <a href="https://api.drupal.org/api/drupal/includes%21file.inc/constant/FILE_EXISTS_RENAME/7?ref=adammalone.net">standard behaviour</a> is to <a href="https://api.drupal.org/api/drupal/includes%21file.inc/function/file_create_filename/7?ref=adammalone.net">rename files</a> thus:</p><pre><code class="language-bash">my_uploaded_file.txt -&gt; my_uploaded_file_0.txt</code></pre><p>Picture this scenario:</p><ol><li>User saves the file foo.txt.</li><li>File is moved (manually) to bar.txt (but the uri in the database will remain foo.txt).</li><li>A new foo.txt is uploaded.</li><li>Drupal will run a <a href="https://php.net/file-exists?ref=adammalone.net">file_exists</a>($uri) returning false as foo.txt does not exist in the filesystem (since it was moved to bar.txt). This means it thinks it does not need to rename the file nor change the uri to match said change.</li><li>Drupal will attempt to insert a record in the file_managed table, but as the uri is a <a href="https://www.w3schools.com/sql/sql_unique.asp?ref=adammalone.net">unique key</a> Drupal will not be able to insert foo.txt as the old record from the first file still exists.</li></ol><p>Changing the filesize of the file can cause downloads to end prematurely as the browser is provided with a <a href="https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html?ref=adammalone.net#sec14.13">Content-Length parameter</a> in the header. If it only expects 20KB it's not going to keep downloading after that!</p><p><strong><strong>What if I really want to alter files?</strong></strong></p><p>If none of this has put you off then the best advice I can offer is:</p><blockquote>If in doubt, file_save.</blockquote><p>This will update (via <a href="https://api.drupal.org/api/drupal/includes%21common.inc/function/drupal_write_record/7?ref=adammalone.net">drupal_write_record</a>) the file_managed table and hopefully avert any potential problems!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Lessons in customer relations ]]></title>
        <description><![CDATA[ Only recently have I noticed the dramatic effect a little bit of feedback from a
business can have on customers; myself as the customer and the businesses being 
The University of Nottingham [http://www.nottingham.ac.uk/] and Meriton
Serviced
Apartments [http://www.meritonapartments.com.au/].

The Backstory

As some ]]></description>
        <link>https://www.adammalone.net/lessons-customer-relations/</link>
        <guid isPermaLink="false">5f33899c21b8f9692ae9349f</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Fri, 12 Oct 2012 09:20:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/dat_comparison.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Only recently have I noticed the dramatic effect a little bit of feedback from a business can have on customers; myself as the customer and the businesses being <a href="http://www.nottingham.ac.uk/?ref=adammalone.net">The University of Nottingham</a> and <a href="http://www.meritonapartments.com.au/?ref=adammalone.net">Meriton Serviced Apartments</a>.</p><p><strong><strong>The Backstory</strong></strong></p><p>As some may know, I spent a lot of time whilst studying in Nottingham involved in the <a href="http://nottswingchun.com/?ref=adammalone.net">University of Nottingham Wing Chun Club</a>. Whether it was participating, learning, teaching (or at least attempting to), marketing or developing the site, it took up my time. Clearly, with such a large portion of time and effort invested, one could say I am a little attached to the club being the best it can be and looking the most appealing; at least in an online sense.</p><p>It was one of the first Drupal websites I ever developed (and I learned a lot from it). I made some absolutely terrible coding decisions that, when I evaluated them a couple of years later, gave me a real shock. However, it is nice to see progression from my dark days of <a href="https://drupal.org/best-practices/do-not-hack-core?ref=adammalone.net">hacking core</a>, ramming random PHP files into the web root and storing additional data in an adjunct <a href="https://www.sqlite.org/?ref=adammalone.net">SQLite database</a> (and no, the <a href="https://drupal.org/node/18429?ref=adammalone.net">database details weren't added to settings.php</a>).
I promise it's all changed now, really!</p><p>When I was first given ssh access to the student website server I had a little bit of a look around and remember being significantly underwhelmed by the lack of <em><em>anything remotely modern.</em></em> The antiquated hardware was the prime reason I started <em><em>actual</em></em> module development on Drupal 6 rather than Drupal 7 (even though all the books I'd been studying from used Drupal 7).</p><p>I recently logged back into the server to update some modules and decided to make a list of all the improvements that could be made and where it fell short. It was my intention to offer it as guidance for the University so the server could be improved, benefiting all the clubs hosted on there.</p><ul><li>No git/SVN - my backups and version control consist of logging on when I remember and creating a tar archive of the web root and a mysqldump of the database.</li><li>No drush - Have you tried developing without drush recently?</li><li>PHP &lt; 5.2.5</li><li>No reason why PHP 5.3 can't be used, but instead it's on 5.1.</li><li>Drupal 7 will not install on PHP &lt; 5.2.5.</li><li>No uploadprogress support.</li><li>Apache configuration is non-negotiable.</li><li>The website is located at http://su-web2.nottingham.ac.uk/~wingchun/ - easy to remember, right?</li><li>Clean URLs cannot be enabled. This is a semi-follow-on from the previous bullet, with the only requirement being mod_rewrite enabled.</li><li>Keep alive (a big favourite of mine) is disabled.</li><li>A quick look in the /home directory shows ~170 user directories, each with its own website.</li><li>The server has &lt; 1GB RAM. Vastly underequipped for the number of sites hosted.</li><li>One cannot download <strong><strong>to</strong></strong> the server.
Any commands like <strong><strong>wget</strong></strong>, <strong><strong>curl </strong></strong>or services that make requests from external sites like the <a href="https://drupal.org/project/update_status?ref=adammalone.net">Update module</a> <strong><strong>DO NOT WORK.</strong></strong></li><li>Although the web roots are kept in user directories, logs are kept out of reach where users may not use them to debug.</li><li>I'll go out on a limb and guess the stack is likely unconfigured.</li><li>A noticeable amount of downtime makes it tempting to abandon the server, even if it is free.</li></ul><p>Unfortunately, all attempts to contact the University, by way of tweet, to their <a href="https://twitter.com/UniofNottingham?ref=adammalone.net">main account</a> and their <a href="https://twitter.com/UoNSU?ref=adammalone.net">student union account</a> (both of which are active) went unheeded, leading me to believe that the University simply does not care about providing quality infrastructure for student-run groups, instead opting to ignore requests to open up a line of communication in the hope that nothing will come of it.</p><p>In the <em><em>information age</em></em>, where contact can be made with the click of a mouse and information freely shared, I see no reason why I was ignored. Surely, with reputations liable to get damaged by negative feedback, the onus is on companies to listen where such feedback is offered.</p><p><strong><strong>A Contrast</strong></strong></p><p>Through 100% fault of my own, whilst booking a brief stay in <a href="http://www.meritonapartments.com.au/sydney/kent-street/?ref=adammalone.net">Meriton on Kent Street</a>, I overlooked a coupon code for 20% off. I read an email from them offering the discount rate and just went ahead and booked, presumably assuming they would just exclaim:</p><blockquote>"Oh look it's Adam! He's clearly forgotten to apply his October coupon code.
Let's just assign that for him."</blockquote><p>Alas this was not the case.</p><p>A brief call sorted everything out and I commended <a href="https://twitter.com/meritonsa?ref=adammalone.net">Meriton</a> on their service by way of tweet. <a href="https://twitter.com/adammalone/status/256307619512324097?ref=adammalone.net">Receiving a reply</a>, although not much effort on their part, appealed greatly to my very human nature of wanting to be heard. It's simple things like this that cause consumers to stick with particular brands and the development of brand loyalty <strong><strong>is </strong></strong>important to business.</p><p>I'm aware that prospective students to Nottingham will not think twice about whether or not they study at the University based on a blog post discussing servers. But regardless of the content, the attitude displayed speaks volumes.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Drupal and Node.js: Push your site ]]></title>
        <description><![CDATA[ Last night I presented Drupal and Node.js: Push your site for the DrupalACT October meet up. Overall I think it went well with the presentation generating a good amount of healthy discussion afterwards about topics covered in the slides and the benefits/drawbacks of Node.js

I&#39;ve ]]></description>
        <link>https://www.adammalone.net/drupal-and-node-js-push-your-site/</link>
        <guid isPermaLink="false">5f33891521b8f9692ae93488</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Wed, 03 Oct 2012 02:15:00 +0000</pubDate>
        <media:content url="https://www.adammalone.net/content/images/2020/08/Non-blocking_0.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Last night I presented <a href="https://groups.drupal.org/node/258358?ref=adammalone.net">Drupal and Node.js: Push your site</a> for the DrupalACT October meet up. Overall I think it went well, with the presentation generating a good amount of healthy discussion afterwards about topics covered in the slides and the benefits/drawbacks of Node.js.</p><p>I've decided to keep the test site up for the near future so people can check out some of the Node.js features I presented:</p><ul><li>Dynamic users online block</li><li>Node.js Comments</li><li>Online Node.js Chat</li></ul><p>I've attached a PDF copy of the presentation to this post (<a href="https://www.adammalone.net/sites/adammalone/files/drupal-and-nodejs.pdf">click here to download</a>) and comments regarding it are more than welcome below!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Drupal subsites made simple ]]></title>
        <description><![CDATA[ The short story

Rather than trawling through my digressions I&#39;ll place the steps for setting up
Drupal sub sites at the top of the post. I will demonstrate how I made
http://typhonius.com a sub site of http://adammalone.net

 * Point the DNS at the IP of ]]></description>
        <link>https://www.adammalone.net/drupal-subsites-made-simple/</link>
        <guid isPermaLink="false">5f33882021b8f9692ae93456</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sat, 29 Sep 2012 01:50:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/total_1.png" medium="image"/>
        <content:encoded><![CDATA[ <p><strong>The short story</strong></p><p>Rather than trawling through my digressions, I'll place the steps for setting up Drupal sub sites at the top of the post. I will demonstrate how I made http://typhonius.com a sub site of http://adammalone.net.</p><ul><li>Point the DNS at the IP of the server.</li><li>Ensure virtual hosts point to the Drupal web root of your pre-existing Drupal site.</li></ul><pre><code class="language-apacheconf">NameVirtualHost *:80
&lt;VirtualHost *:80&gt;
  DocumentRoot /home/drupal/public_html
  ServerName adammalone.net
  ErrorLog logs/drupal-error_log
  CustomLog logs/drupal-access_log common
&lt;/VirtualHost&gt;
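# Both virtual hosts intentionally share the same DocumentRoot; Drupal's
# sites.php decides which site configuration each hostname loads.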
&lt;VirtualHost *:80&gt;
  DocumentRoot /home/drupal/public_html
  ServerName typhonius.com
  ErrorLog logs/typhonius-error_log
  CustomLog logs/typhonius-access_log common
&lt;/VirtualHost&gt;</code></pre><ul><li>Create databases for the sub sites.</li></ul><pre><code class="language-sql">CREATE DATABASE typhoniussite;
GRANT ALL PRIVILEGES ON typhoniussite.* TO 'typhonius'@'localhost' IDENTIFIED BY 'password';</code></pre><ul><li>Create directories for the sub sites in the sites directory. (A useful tip is to make the directory names the same as the domain names.)</li></ul><pre><code class="language-bash">mkdir sites/typhonius.com</code></pre><ul><li>Copy sites/example.sites.php to sites/sites.php</li></ul><pre><code class="language-bash">cp sites/example.sites.php sites/sites.php</code></pre><ul><li>Alter sites.php and add in the directory names and associated domain names.</li></ul><pre><code class="language-php">  $sites['typhonius.com'] = 'typhonius.com';</code></pre><ul><li>Copy sites/default/default.settings.php to sites/[site name]/settings.php</li></ul><pre><code class="language-bash">cp sites/default/default.settings.php sites/typhonius.com/settings.php</code></pre><ul><li>Navigate to newsubsitename/install.php and install your new subsite!</li></ul><p>Hopefully these steps are reproducible enough for another person; if not, submit a comment and tell me why!</p><p><strong>The long story</strong></p><p>I acquired a couple of additional domains a short while ago and as yet haven't decided what to do with them. I'm not just hoarding domains, so perhaps desist from linking me to the next meeting of Domain Campers Anonymous. Like many who've spent a portion of their life online, I have a pseudonym, since being called by my real name online is just so uncool. This was the name I signed up to <a href="https://drupal.org/user/1295980?ref=adammalone.net">Drupal.org with</a>.</p><p>Since typhonius.com was not in use I decided to nab it, with the intention of making a simple one-page site giving people a little bit of information about me with reference to that name. 
Until today, http://typhonius.com simply pointed towards http://adammalone.net, so anyone navigating there would end up on this site. I didn't feel a one-page site deserved its own unique Drupal install, and didn't feel much like making a web page from scratch, so I considered a Drupal sub site as the solution. This was twinned with the fact that although I've worked on Drupal installs which are sub sites, I've not actually set one up before.</p><p><strong>Enter the challenge</strong></p><p>After looking online for instructions on setting up sub sites from scratch, I found <a href="http://mearra.com/blogs/sampo-turve/drupal-7-sites-php?ref=adammalone.net">nothing</a> <a href="https://drupal.stackexchange.com/questions/22822/best-way-to-create-subsites?ref=adammalone.net">of</a> <a href="http://diuf.unifr.ch/main/tech/node/9?ref=adammalone.net">immediate</a> <a href="https://drupal.stackexchange.com/questions/19171/for-drupal-7-sub-domain-installation-is-sites-php-the-same-as-6s-use-of-symboli?ref=adammalone.net">value</a>. With a little intuition and some digging through <a href="https://api.drupal.org/api/drupal/includes%21bootstrap.inc/7?ref=adammalone.net">bootstrap.inc</a>, I worked out the methods detailed above, which appeared to succeed. The advantage of running sub sites is twofold. Not only does it keep the codebase small (vital for a small server like mine), but as each site runs from the same core and contributed modules, they only need to be updated once.</p><p>The code enabling sub sites in <a href="https://api.drupal.org/api/drupal/includes%21bootstrap.inc/7?ref=adammalone.net">bootstrap.inc</a> is elegant to say the least, and I recommend people check out the <a href="https://api.drupal.org/api/drupal/includes!bootstrap.inc/function/conf_path/7?ref=adammalone.net">conf_path</a> function. 
Matching the current URL against the configuration in the sites.php file alters the configuration path, and hence which <a href="https://drupal.org/documentation/install/settin?ref=adammalone.net">settings.php</a> the request uses.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Necessity in Automation ]]></title>
        <description><![CDATA[ The more I&#39;ve learned about code/coding, the more I see inefficiencies in daily
life. Anything done repetitively and predictably is a sign to me to automate.

This applies to things from the manufacturing and services industries to content
management and data manipulation. To an extent this is ]]></description>
        <link>https://www.adammalone.net/necessity-automation/</link>
        <guid isPermaLink="false">5f3387b721b8f9692ae9343d</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Fri, 28 Sep 2012 08:30:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/code.png" medium="image"/>
        <content:encoded><![CDATA[ <p>The more I've learned about code/coding, the more I see inefficiencies in daily life. Anything done repetitively and predictably is a sign to me to automate.</p><p>This applies to things from the manufacturing and services industries to content management and data manipulation. To an extent this is already happening, yet quite often it comes to my attention that tasks are still done by hand.</p><p>I'm guilty of this too, but perhaps to a lesser extent. Maybe I could put <a href="https://www.debian-administration.org/articles/152?ref=adammalone.net">ssh keys</a> everywhere to eliminate the need to take time entering my password, but by the same token, perhaps I cannot be bothered.</p><p>Therein lies the other factor involved in automation: the initial investment, be it time or money, to set up. If it takes many hours to set up a system that saves a few seconds, it normally isn't worth the time. However, a system that saves hours of work per month for only a little initial time investment is definitely a wise move.</p><p>This extends into the territory of short one-liners that anyone who works on the command line often will be familiar with. Using tools like <a href="http://www.grymoire.com/Unix/Sed.html?ref=adammalone.net">sed</a>, <a href="http://www.grymoire.com/Unix/Awk.html?ref=adammalone.net">awk</a>, <a href="http://www.grymoire.com/Unix/Grep.html?ref=adammalone.net">grep</a> and <a href="https://linux.die.net/man/1/rename?ref=adammalone.net">rename</a>, combined with a working knowledge of vim (or your preferred text editor), can turn an undesirable slog into an effortless task. 
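</p><p>As a sketch of the same idea (the file and theme names here are invented for illustration), references inside the files themselves can be updated with a loop and <code>sed</code> rather than editing each one by hand:</p><pre><code class="language-bash"># Create a throwaway CSS file that still references the base theme.
mkdir -p /tmp/theme-demo
printf '.base-theme-alpha-header { color: red; }\n' | tee /tmp/theme-demo/styles.css

# Swap every base theme reference for the subtheme name, keeping a .bak copy.
for f in /tmp/theme-demo/*.css; do
  sed -i.bak 's/base-theme-alpha/new-sub-theme-alpha/g' "$f"
done</code></pre><p>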
One particular example of this was when I made a <a href="https://drupal.org/node/225125?ref=adammalone.net">Drupal subtheme</a> and had to rename all the associated CSS files to the appropriate theme name.</p><pre><code class="language-bash">rename "s/base-theme-alpha(.*)/new-sub-theme-alpha\1/g" *.css</code></pre><p>Not much time is saved by this one-liner, but it removes the need to alter the name of each file one after the other.</p><p>I was taught in high school physical education classes about the <a href="https://www.acronymfinder.com/Specificity,-Progression,-Overload,-Reversibility,-Tedium-(principles-of-training)-(SPORT).html?ref=adammalone.net">principle of SPORT</a> (Specificity, Progression, Overload, Reversibility, Tedium) and how it can be used to train better. This same acronym can be applied in part to training and work in general. If an employee has to, almost ritualistically, complete a repetitive and mundane task, they'll suffer tedium and get bored and distracted, leading to a reduction in effectiveness and work rate.</p><p>Let scripts and code take care of the uninteresting, repetitive work. Free yourself and your employees from those shackles to be more productive thinking up things that AI is currently not advanced enough to think up. A certain amount of what I, and a lot of others in my industry, do is inventing. I create new modules, find new ways of solving problems and fix things to work more efficiently and save time. Learning that database records and updates were being entered manually and HTML was formatted in the same way every week was a <strong>clear</strong> sign to me that I could automate, and the inspiration for writing this post.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ My social experiment ]]></title>
        <description><![CDATA[ I decided to run a little experiment on the main three social networks I have a
major presence on over the last couple of days. It involved a good piece of news
that I hoped would inspire a lot of people to comment; especially when others
also comment, pushing it ]]></description>
        <link>https://www.adammalone.net/my-social-experiement/</link>
        <guid isPermaLink="false">5f33875521b8f9692ae93423</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Tue, 25 Sep 2012 15:20:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/combined.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Over the last couple of days I ran a little experiment on the three main social networks where I have a major presence. It involved a good piece of news that I hoped would inspire a lot of people to comment, especially as others' comments push it to the top of the 'hot' lists.</p><p>I received an email granting me the Australian visa I applied for in July 2012. This means all restrictions imposed on me by the previous visa (travel and work) were lifted and I have indefinite leave to remain in the country; something I believe is referred to as temporary residency. It looks like I have around two years to wait for permanent residency, but it's all just part of the process and I'm glad it's proceeding.</p><p>After posting exactly the same message on all the social networks, I tracked responses and compared them to the audience reached, in an effort to determine where my most active 'friends' reside. A minor caveat is that some people overlap social networks, so they could have, but would not necessarily want to have, commented in more than one network. I'd like to do some further graphs showing which people share networks; perhaps a Venn diagram?</p><p>The following <a href="https://developers.google.com/chart/?ref=adammalone.net">Google chart</a> is a nice way of viewing the data collected. Although facebook appears to be the leader in terms of percentage respondents, and indeed is (5.71% compared to 4.44% from G+), it should be noted that 66% of facebook respondents are also in my Google+ circles. 
Perhaps, then, friends with high sharing/posting tendencies respond on whichever social network they use most, whichever they see the post on first, or wherever others have already commented.</p><p>I was slightly surprised at the lack of responses from <a href="https://twitter.com/adammalone/status/250128279774834689?ref=adammalone.net">twitter</a>, but I suppose twitter offers the least support for managing complex interpersonal relationships of the kind Google+ and facebook provide.</p><p>In summary, it appears my active contacts use both main networks, with facebook leading, albeit only slightly. The proportion of friends responding is similar enough on both main networks to inspire confidence in me that there is hope yet for the alternative social network. I also learnt that twitter friends don't feel the desire to respond. Either that or they've made their feelings known elsewhere.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ The Perfect Restaurant ]]></title>
        <description><![CDATA[ It&#39;s rather a passion of mine to indulge in good food. Not regularly, but enough
to be able to savour delights from many of the nicer local establishments. I&#39;ve
also had the privilege of dining at a number of nice places in cities and
countries which ]]></description>
        <link>https://www.adammalone.net/perfect-restaurant/</link>
        <guid isPermaLink="false">5f3386f621b8f9692ae9340d</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 23 Sep 2012 15:08:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/restaurant.png" medium="image"/>
        <content:encoded><![CDATA[ <p>It's rather a passion of mine to indulge in good food. Not regularly, but enough to be able to savour delights from many of the nicer local establishments. I've also had the privilege of dining at a number of nice places across cities and countries, which I believe bestows upon me the right to envisage <em>The Perfect Restaurant</em>.</p><p>Now it must be remembered that a lot of restaurants are good, excellent in fact. However, I've always felt a little let down by some minor intricacy that leaves me wanting. It's almost not enough to just cook excellent food anymore.</p><p><strong>Food &amp; Menu</strong></p><p>Although <em>The Perfect Restaurant</em> shouldn't focus solely on food, that's not to say that the menu isn't important. I <em>hate</em> menus with more than a single page. It's hard enough deciding what colour shirt to wear in the morning sometimes, so having the choice of four mains makes life that little bit easier. The other advantage of a smaller menu is that the chefs will be able to cook those four dishes exceptionally, rather than being able to cook fifty dishes <em>adequately</em>.</p><p>A regularly changing menu, though, wouldn't be undesirable. If it's going to be the place I want to go regularly, I don't want to keep having the same dish twice. Why stick with X when Y could be so much better?</p><p>A lot of the dining experience exists in the general ambience of the place; a theme is always good, especially when twinning the food and atmosphere. The music, however, should be neither too loud nor too soft. Far too often I'll either catch only a waft of notes as I wander to my table, or it'll be the only thing I hear for the duration of the meal. Ideally, each table would have directional speakers with an ability to control the volume for the leisure of those dining.</p><blockquote>"Too loud? 
Let's just turn the volume down!"</blockquote><p><strong>The summon</strong></p><p>Another thing that could be specific to each table is the ability to summon a waiter. I've seen a number of innovative methods of the waiter-summon. When I dined in <a href="http://www.fogodechao.com/index.php?id=167&ref=adammalone.net">Fogo de Chao</a> they used coloured tabs to denote whether you desired attention or not. If the tile was green side up, a waiter would attend your table promptly. If the tile was red side up, the table was bypassed. Curb Your Enthusiasm <a href="https://www.quora.com/Curb-Your-Enthusiasm-TV-series/What-is-the-funniest-Curb-Your-Enthusiasm-scene?ref=adammalone.net">featured bells on tables</a> as a summoner; perhaps a little obtrusive, but the general point still stands. Perhaps even a small switch near each chair, similar to those used on aircraft, would bring even more granularity in terms of which diner requires service.</p><p>Having a signal is the optimum method of obtaining service in a busy (or quiet) restaurant. Repeated asks of "Are you ready to order yet?", not being able to attract attention, or simply being ignored would all be eradicated in one fell swoop! A coup for the awkward/polite diner and an efficiency tool for the overburdened server.</p><p><strong>I'm a huge fan of the round table.</strong></p><p>Seating, too, is more important than a lot of places understand. Not only the comfort of the seat (firm but soothing), but both the orientation towards other diners and geographic location.</p><p><a href="https://en.wikipedia.org/wiki/Round_Table?ref=adammalone.net">King Arthur had it right</a> when he selected the round table for Camelot. As one who enjoys entering the fray of animated dinner conversation with friends or colleagues, the absolute worst place I can possibly be is in a corner. I have an effective audience of about three people. 
This causes table fracture and disjointed conversation, requiring double the time to tell stories and forcing some to listen to them twice. The <em>round table</em> prevents people being forced into the corner and allows any diner the conversation spotlight should they so desire it.</p><p><strong>Airlocks, on Earth</strong></p><p>Sitting next to the door on a cold night is for sure a sustenance ruiner. The food may be fantastic, but if the draught is chilling me every time another patron enters or exits, it does rather kill the mood. This is possibly on the higher end of alterations a restaurant could make, but I'll say it anyway: airlocks. An antechamber that nulls airflow from the outdoors and warms said air would create a stable atmosphere indoors, not ruined by the passage of an angsty Antarctic southerly.</p><p>It would also remove doorslam from the restaurant equation; one of the leading causes of my discomfort.</p><p><strong>The star rating</strong></p><p>Finally, because I don't want this to go on forever, the bathrooms are one of the most important, yet simultaneously underrated, areas of the restaurant. Some of the best restaurants I've eaten in have had shoddy bathrooms, which unfortunately removes a notch or two from my internal rating system. I shall use one of <a href="http://www.gordonramsay.com/maze/?ref=adammalone.net">Gordon Ramsay's restaurants, maze</a>, as my benchmark for good bathroom practice.</p><ul><li>Dimly lit, with candles and warm bulbs. The ambience of the restaurant continues.</li><li>Paper towels for hands; hand driers may be environmentally friendly, but the noise generated is aurally unfriendly.</li><li>A selection of hand creams and lotions to make even the gruffest of men feel pretty.</li><li>Excellent decoration. The bathrooms should not be an afterthought, but integrated into the overall design. 
Wood panelling on the outside? Well, my cubicle had better have that same effect.</li></ul><p>It's a small internal game I play, rating the bathrooms of places. Points may be attained with scented candles and a man offering to spray cologne, yet lost for cigarette burns on the porcelain. It's a dangerous game to go cheap on the smallest room.</p><p>I'd be interested to see whether others share my opinions or have their own ideas on what would make <em>The Perfect Restaurant</em>. Comments are below!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Allowing anonymous comment deletion rights ]]></title>
        <description><![CDATA[ I have implemented a method of anonymous comment deletion on this site, based on
URL callback of a link displayed at comment post time.

As I explained in this forum post
[https://drupal.org/node/1482112#comment-6484222], the development of a module
that enables anonymous deletion arose from the desire ]]></description>
        <link>https://www.adammalone.net/allowing-anonymous-comment-deletion-rights/</link>
        <guid isPermaLink="false">5f33852621b8f9692ae933d0</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Mon, 17 Sep 2012 08:12:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/comment_0.png" medium="image"/>
        <content:encoded><![CDATA[ <p>I have implemented a method of anonymous comment deletion on this site, based on a URL callback from a link generated at comment post time.</p><p>As I explained in <a href="https://drupal.org/node/1482112?ref=adammalone.net#comment-6484222">this forum post</a>, the development of a module that enables anonymous deletion arose from the desire to instantly remove spam comments that bypass any spam filtering on the website. A lot of emails notifying me of comments being posted arrive in the email box on my mobile phone. Since my phone isn't continuously logged into my site, having to log in every time I want to delete spam is a pain. By having a callback, secured with a hash, that can delete individual comments without logging in, I am able to delete any comment with ease.</p><pre><code class="language-php">/**
 * Implements hook_menu()
 */
function mymodule_menu() {
  $items['comment/%/fastdelete/%'] = array(
    'title' =&gt; 'Fast Comment Deletion',
    'page callback' =&gt; 'mymodule_comment_fastdelete',
    'page arguments' =&gt; array(1, 3),
    'access callback' =&gt; TRUE,
    'type' =&gt; MENU_CALLBACK,
  );
  
  return $items;
}
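// Note: 'access callback' =&gt; TRUE makes this path reachable without logging
// in; the hash appended to the URL is the only thing securing deletion.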

/**
 * Authentication function for menu callback determining if the
 * comment should be deleted
 */
function mymodule_comment_fastdelete($cid, $hash) {
  $comment = comment_load($cid);
  // First check to see if the comment actually exists
  if ($comment) {
    // Add in a timeout so the comment can be deleted only in the
    // first 24 hours after posting.
    $timeout = variable_get('user_password_reset_timeout', 86400);    
    $current = REQUEST_TIME;
    if ($current - $timeout &gt; $comment-&gt;created) {      
      drupal_set_message(t('You have tried to use a comment delete link that has expired. To have the comment deleted please contact the site administrator.'), 'warning');
      drupal_goto('contact-me');
    }
    else {
      // Load part of the user object of the node author for a secret string to send to user_pass_rehash
      $author = mymodule_node_author_pass_from_cid($cid);
      if ($hash === user_pass_rehash($cid, $comment-&gt;created, $author-&gt;pass) &amp;&amp; $current &gt;= $comment-&gt;created) {
        watchdog('mymodule', 'Comment Autodelete link used', array(), WATCHDOG_NOTICE);
        comment_delete($cid);
        drupal_set_message(t('Comment successfully deleted!'));
        drupal_goto('node/' . $comment-&gt;nid);
      }
      else {
        drupal_set_message(t('You have tried to use an invalid comment deletion link.'), 'warning');
        drupal_goto('node/' . $comment-&gt;nid);
      }
    }
  }
  else {
    drupal_set_message(t('You have tried to use an invalid comment deletion link.'), 'warning');
    drupal_goto('');
  }
}
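// Note: the expiry check above reuses Drupal's user_password_reset_timeout
// variable (24 hours by default), so changing that setting also changes how
// long a deletion link stays valid.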

/**
 * Generates the deletion link for a specific comment.
 */
function mymodule_comment_fastdelete_link($cid) {
  $comment = comment_load($cid);
  $author = mymodule_node_author_pass_from_cid($cid);
  // Combine a number of variables to construct a private hash that will be validated in order to delete the comment.
  return url("comment/$cid/fastdelete/" . user_pass_rehash($cid, $comment-&gt;created, $author-&gt;pass), array('absolute' =&gt; TRUE));
}
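// Example (hypothetical values): for comment 42 this might return
// http://example.com/comment/42/fastdelete/AbCdEf123, ready to be embedded
// in a notification email via the token defined below.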

/**
 * Returns the hashed password of the node author the comment is posted on.
 * Used for an unknown part of the hash that an anonymous user could not guess
 */
function mymodule_node_author_pass_from_cid($cid) {
  $result = db_query('SELECT u.pass FROM {comment} c JOIN {node} n on n.nid = c.nid JOIN {users} u ON n.uid = u.uid WHERE c.cid = :cid', array(':cid' =&gt; $cid));
  return $result-&gt;fetchObject();
}

/**
 * Implements hook_token_info_alter()
 */
function mymodule_token_info_alter(&amp;$data) {
  $data['tokens']['comment']['comment_fastdelete_link'] = array(    
    'name' =&gt; t("Comment Delete Link"),
    'description' =&gt; t("A link to immediately delete a comment."),
  );
}

/**
 * Implements hook_tokens()
 *
 */
function mymodule_tokens($type, $tokens, array $data = array(), array $options = array()) {
  $replacements = array();
  if ($type == 'comment') {
    foreach ($tokens as $name =&gt; $original) {
      switch ($name) {
        case 'comment_fastdelete_link':
          $cid = $data['comment']-&gt;cid;
          // Only build the link once we know the comment ID is set.
          if (isset($cid)) {
            $replacements[$original] = mymodule_comment_fastdelete_link($cid);
          }
          else {
            $replacements[$original] = '';
          }
          break;
      }
    }
  }
  return $replacements;
}</code></pre><p>I've allowed anonymous users permission to delete their own comments on this node, as a proof of concept for people wishing to test out the functionality. I'm considering creating a separate module for this functionality and releasing it for other Drupal users. This all depends on the responses to this post; if they're good, I'll make a proper module out of it!</p><p>A couple of additional things I'd add to a module would be the ability for the administrative user to allow/disallow the functionality on certain nodes/content types. I'd also add in some kind of alteration if the comment has child comments beneath it. Perhaps instead of deleting the comment, a better way to deal with it would be to replace the comment body text with [deleted] or similar.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Dealing with large filesizes in Drupal ]]></title>
        <description><![CDATA[ Storing large filesizes

I&#39;m currently writing a module [https://drupal.org/sandbox/typhonius/1668814] in
my spare time with the emphasis on providing the integration of Drupal and 
seedboxes [https://en.wikipedia.org/wiki/Seedbox]. More a proof of concept
module than anything else. In doing so, I& ]]></description>
        <link>https://www.adammalone.net/dealing-large-filesizes-drupal/</link>
        <guid isPermaLink="false">5f3383c321b8f9692ae9339a</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Tue, 11 Sep 2012 05:29:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/schema_alter.png" medium="image"/>
        <content:encoded><![CDATA[ <p><strong>Storing large filesizes</strong></p><p>I'm currently <a href="https://drupal.org/sandbox/typhonius/1668814?ref=adammalone.net">writing a module</a> in my spare time with the emphasis on integrating Drupal with <a href="https://en.wikipedia.org/wiki/Seedbox?ref=adammalone.net">seedboxes</a>. More a proof of concept module than anything else. In doing so, I've run into a couple of limitations of Drupal when it comes to storing large files.</p><p>The first limitation is storing filesizes greater than 2GB. The system module, within its hook_schema declaration for the file_managed table, declares the filesize field as an 'int':</p><pre><code class="language-php">'filesize' =&gt; array(
  'description' =&gt; 'The size of the file in bytes.',
  'type' =&gt; 'int',
  'unsigned' =&gt; TRUE,
  'not null' =&gt; TRUE,
  'default' =&gt; 0,
),</code></pre><p>For general use this isn't too much of an issue; however, since MySQL's signed INT fields have a <a href="https://dev.mysql.com/doc/refman/5.5/en/integer-types.html?ref=adammalone.net">maximum value of 2147483647 bytes</a>, trying to store any larger value will result in an error. 
This occurs whenever <a href="https://api.drupal.org/api/drupal/includes!file.inc/function/file_save/7?ref=adammalone.net">file_save</a> is called with a <code>$file-&gt;<a href="http://www.php.net/filesize?ref=adammalone.net">filesize</a></code> larger than the threshold.</p><p>It's not too hard to change this, but we must change it in the correct fashion so that it does not get overridden in future and so the rest of the system is aware of what we've done.</p><p>Within the module I am writing, I've added a few things to <a href="https://api.drupal.org/api/drupal/modules!system!system.api.php/function/hook_install/7?ref=adammalone.net">hook_install</a> and <a href="https://api.drupal.org/api/drupal/modules!system!system.api.php/function/hook_uninstall/7?ref=adammalone.net">hook_uninstall</a>, as well as putting in a <a href="https://api.drupal.org/api/drupal/modules!system!system.api.php/function/hook_schema_alter/7?ref=adammalone.net">hook_schema_alter</a>.</p><pre><code class="language-php">/**
 * Implements hook_install()
 */
function seedbox_install() {
  db_change_field('file_managed', 'filesize', 'filesize', array('type' =&gt; 'int', 'size' =&gt; 'big'));
}

/**
 * Implements hook_uninstall()
 */
function seedbox_uninstall() {
  db_change_field('file_managed', 'filesize', 'filesize', array('type' =&gt; 'int', 'size' =&gt; 'normal'));
}

/**
 * Implements hook_schema_alter()
 */
function seedbox_schema_alter(&amp;$schema) {
  if (isset($schema['file_managed'])) {
    $schema['file_managed']['fields']['filesize'] = array('type' =&gt; 'int', 'size' =&gt; 'big', );
  }
}
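// For reference: a signed BIGINT tops out at 9223372036854775807 bytes
// (roughly 9.2EB) when used for a filesize, hence the ceiling quoted below.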
</code></pre><p>The first declaration has the effect of changing the structure of the filesize field from INT to BIGINT, raising the largest storable value to around 9.2EB, which should cater for files way into the future. We must also change the field back when we uninstall the module, which is where hook_uninstall comes in.</p><p>The hook_schema_alter is simply a polite way of letting other modules know what we've done!</p><p><strong>Downloading large files</strong></p><p>Unless you're lucky enough to have gigabit-speed internet to your home, or you're storing large files in Drupal on a LAN, the connection speed is likely to be limited. In a private file system, any download is done through PHP, which has a maximum execution time. I was finding that, when attempting to download large test files, the download would cancel after a few minutes with the rather confusing error of 'size mismatch'. A little more looking through the logs and file.inc in general revealed the most likely cause was PHP's execution time being exceeded.</p><p>I didn't want to change the execution time globally, as this could have further repercussions on other sites on the same server or on the operation of the site in general. The other option was to amend file_transfer and place a <a href="https://php.net/manual/en/function.set-time-limit.php?ref=adammalone.net">set_time_limit(0);</a> directly before the file was transferred to the user. Since hacking core is <a href="https://drupal.org/best-practices/do-not-hack-core?ref=adammalone.net">hacking core</a>, I decided to find the relevant hook and place my declaration there.</p><p>The reason I limited this to the stream wrapper I implemented was to limit the number of requests in which PHP's execution limit was removed.</p><pre><code class="language-php">/**
 * Implements hook_file_download().
 *
 * Due to the large size of some files it is necessary to remove
 * the restriction PHP imposes on the length of time it takes to
 * execute this transaction. The limit is currently removed
 * entirely; this could potentially be made configurable in an
 * admin interface.
 */
function seedbox_file_download($uri) {
  if (file_uri_scheme($uri) == 'seedboxdownload') {
    drupal_set_time_limit(0);
  }
}
 
</code></pre><p>I generated files using the dd command, specifically:</p><pre><code class="language-bash">dd if=/dev/zero of=filename bs=1 count=1 seek=1048575</code></pre> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Another experience with Nginx, Drupal and CentOS ]]></title>
        <description><![CDATA[ I had a chance to try NGINX
[https://github.com/masterzen/nginx-upload-progress-module] on a CentOS 6 server
with the intention of running Drupal 7 on it a few weeks ago. It was a little
less easy than expected so I&#39;ve decided to run through a few of the ]]></description>
        <link>https://www.adammalone.net/another-experience-nginx-drupal-and-centos/</link>
        <guid isPermaLink="false">5f33829b21b8f9692ae93352</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Tue, 04 Sep 2012 22:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/centos_nginx_0.png" medium="image"/>
        <content:encoded><![CDATA[ <p>I had a chance to try <a href="https://github.com/masterzen/nginx-upload-progress-module?ref=adammalone.net">NGINX</a> on a CentOS 6 server with the intention of running Drupal 7 on it a few weeks ago. It was a little less easy than expected, so I've decided to run through a few of the trickier steps I underwent in case I need to do it again, or indeed they are of help to others!</p><p>After running a </p><pre><code class="language-bash">yum install nginx</code></pre><p>I found that not all of the modules the <a href="https://github.com/masterzen/nginx-upload-progress-module?ref=adammalone.net">NGINX</a> Drupal configuration files required were present. After downloading the <a href="https://wiki.nginx.org/Install?ref=adammalone.net">latest NGINX source</a> and recompiling a few times to add in dependencies I had previously skipped, I found the following compile options worked for me.</p><p>One caveat is that the <a href="https://github.com/masterzen/nginx-upload-progress-module?ref=adammalone.net">NGINX upload progress module</a> should be downloaded first and placed in the /root/ directory (the following compile options assume that location for the --add-module option!)</p><p>If <a href="http://www.pcre.org/?ref=adammalone.net">PCRE</a> is not installed, it should be installed to allow URL rewrites:</p><pre><code class="language-bash">yum install pcre-devel</code></pre><p>Similarly, the compile options will require openssl for SSL support:</p><pre><code class="language-bash">yum install openssl-devel

./configure --sbin-path=/usr/local/sbin --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-cc-opt="-I /usr/include/pcre" --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --conf-path=/etc/nginx/nginx.conf --with-file-aio --with-http_flv_module --with-http_mp4_module --with-ipv6 --add-module=/root/nginx-upload-progress-module</code></pre><p>After making and installing, I moved the existing contents of /etc/nginx to a backup folder and cloned the Drupal settings from <a href="https://github.com/perusio/drupal-with-nginx?ref=adammalone.net">perusio's repository</a> into /etc/nginx.</p><pre><code class="language-bash">git clone git://github.com/perusio/drupal-with-nginx.git</code></pre><p>I did make a few alterations to suit my system more appropriately, but these may be omitted in other use cases:</p><ul><li>Renamed the user from www-data to nginx</li><li>Allowed access to the NGINX status page from an external IP address (my own)</li><li>Changed ports for the PHP CGI server (as I had another service operating on port 9000)</li></ul><p>Other than that, I created myself a site config file within sites-available by copying the existing example.com.conf and inputting my own values. The most important values to check and/or change are:</p><ul><li>server name (the vhost NGINX should be listening for, i.e. adammalone.net)</li><li>Log locations (debugging one large error log on a multi-site install is a major pain)</li><li>root (where the website is installed)</li></ul><p>The only caveat with the above is that NGINX will only allow access to index.php as standard.
To allow external access to cron.php and update.php, the line including the additional configuration will need to be uncommented:</p><pre><code class="language-nginx">include sites-available/drupal_cron_update.conf;</code></pre><p>I use <a href="https://drupal.org/project/drush?ref=adammalone.net">Drush</a> for cron and update, so this was not an issue in my install.</p><p>After all the time it took learning about NGINX configuration, and after having used apache for years prior, I decided to change back to apache for a number of reasons. The primary reason was that the site experienced a few random lock-ups. I'm unsure whether this was attributable to NGINX or PHP CGI, but having limited knowledge of the two I was unable to do much better than restart both services to clear the issue.</p><p>I did, however, find that NGINX config files were a little easier to understand at first glimpse than apache's. Simple declarations, and not a lot of them, mean a site can be raised very easily without a great deal of worrying about</p><pre><code class="language-apacheconf">&lt;/VirtualHost&gt; without matching &lt;VirtualHost&gt; section</code></pre><p>or</p><pre><code class="language-bash">Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName</code></pre><p>NGINX also seemed a little snappier using default untuned configuration compared to default untuned apache configuration. This may just be due to the server I was using being more suited to those defaults, but it sure did seem more responsive from an end user perspective.</p><p>My overall conclusion would probably be to try it out, as it may work nicely for differing uses; however, for the time being I'll stick with apache. <a href="https://en.wikipedia.org/wiki/Apache_HTTP_Server?ref=adammalone.net">It does serve most websites after all</a>!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Baader-Meinhof and Drupal ]]></title>
        <description><![CDATA[ It&#39;s a phenomenon that I experience a lot. Once you learn about it and learn the
word, you&#39;ll more likely than not experience a Baader-Meinhof phenomenon with
the actual name of the phenomenon itself!

I&#39;m not going to repeat too much what this site ]]></description>
        <link>https://www.adammalone.net/baader-meinhof-and-drupal/</link>
        <guid isPermaLink="false">5f3381ec21b8f9692ae93340</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Mon, 27 Aug 2012 06:04:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/deja_vu_cat.png" medium="image"/>
        <content:encoded><![CDATA[ <p>It's a phenomenon that I experience a lot. Once you learn about it and learn the word, you'll more likely than not experience a Baader-Meinhof phenomenon with the actual name of the phenomenon itself!</p><p>I'm not going to repeat too much what <a href="http://www.damninteresting.com/the-baader-meinhof-phenomenon/?ref=adammalone.net">this site says</a> about Baader-Meinhof but I will give my own experiences of it in relation to Drupal. I have to keep this blog vaguely relevant right?</p><p>Over the past few days I've been doing a little more work on <em><em>​<a href="https://drupal.org/sandbox/typhonius/1668814?ref=adammalone.net">another</a></em></em><a href="https://drupal.org/sandbox/typhonius/1668814?ref=adammalone.net">​ contrib module</a>. It's currently still in sandbox but when I, and some of the community, are satisfied with it I'll promote it to a full project. Writing README files isn't really my idea of a good time but I'm aware of how necessary it is. <a href="http://drupalcode.org/project/twitter_block.git/blob/5d51b489583b9deea3ebfc80e46bab8e70939a1e:/README?ref=adammalone.net">README</a> and <a href="http://drupalcode.org/project/twitter_block.git/blob/5d51b489583b9deea3ebfc80e46bab8e70939a1e:/INSTALL?ref=adammalone.net">INSTALL</a> files like those linked are some of the most downright unhelpful things to be included in modules when you can't accomplish what you set out to immediately.</p><p>Within that module I decided that <a href="https://drupal.org/project/rules?ref=adammalone.net">rules</a> integration would be an excellent idea. I gave <em><em>​<a href="https://ia700804.us.archive.org/6/items/TheTinydrupalBookOfRules/?ref=adammalone.net">The Tiny Book of Rules</a></em></em> a whirl and set out to write the custom events, conditions and actions that would give the module some much desired automation. It must be said that I've not really paid a lot of attention to rules before. 
Sure I'm aware that you essentially have to use it for anything Commerce related and I've heard tales of its uses but I've never really dabbled in it personally.</p><p>So here's the twist. Today whilst discussing methods to allow certain users instant access to auth-only material, rules seemed the perfect choice. I was able to explain how to implement something <strong><strong>and</strong></strong>​  I could even use complex words like 'action' and 'event'!</p><figure class="kg-card kg-image-card"><img src="/sites/adammalone/files/styles/large/public/rules.jpg" class="kg-image" alt></figure><p>To the pre-Baader-Meinhof <em><em>​Adam Malone</em></em>​ that's a spooky coincidence! I only learned about it a few days ago and suddenly it's useful, however I must now accept the more mundane answer. There have probably been a hundred or so other times prior to this where rules could be used to solve some issues that were plaguing us. However, because I was unaware of its power I did not suggest it. Now that I know of rules, sure enough, a use case appears and it seems like magic that I only learned about it the other day.</p><blockquote>It's a sign!</blockquote><p>The article linked above really is worth a read and I can guarantee you'll start experiencing your own personal Baader-Meinhof phenomena soon.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ My goodness, my spamness ]]></title>
        <description><![CDATA[ For any individual or organisation that has a public facing website allowing
comments and user interaction spam is a continual problem. Taking a look at some
of my watchdog logs for this site over the past few days is probably quite
indicative of the sheer volume of spam that even ]]></description>
        <link>https://www.adammalone.net/my-goodness-my-spamness/</link>
        <guid isPermaLink="false">5f33818521b8f9692ae9332c</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Fri, 24 Aug 2012 08:15:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/spam.png" medium="image"/>
        <content:encoded><![CDATA[ <p>For any individual or organisation that has a public facing website allowing comments and user interaction, spam is a continual problem. Taking a look at some of my watchdog logs for this site over the past few days is probably quite indicative of the sheer volume of spam that even a small site can generate.</p><p>Perhaps it's my fault for allowing anonymous comments, but without allowing any form of reader feedback the flow of content is one way; not the greatest idea for the website of someone who advocates transparency and sharing of information. I don't want to make people register to comment either, as:</p><ul><li>People don't want to register on sites to comment and will more likely not comment than register.</li><li>The spam created will now be in user registrations rather than comments and I'll have to filter through them instead.</li></ul><p><strong>What options do I have for dealing with spam?</strong></p><p>Luckily, the Drupal community being the way it is,</p><blockquote>There's a module for that!</blockquote><p>In fact there are several. They can all be vaguely categorised into one of three categories:</p><ul><li>Protection against bots</li><li>Prevention of posting advertisements/medication</li><li>Quality control</li></ul><p>To take a huge chunk of the spam filtering modules out in one hit, the <a href="https://drupal.org/project/captcha?ref=adammalone.net">CAPTCHA module</a> has most covered. By including a field at the bottom of comment forms, CAPTCHA includes a variety of techniques for deterring/rebuking spam posted to the site. From the very simple math captcha to a more complex pictorial captcha, this really is one of the simplest ways to prevent spam.</p><p>CAPTCHA and assorted modules like <a href="https://drupal.org/project/field_hidden?ref=adammalone.net">Field Hidden</a> are best targeted against bots.
Spambots are getting better at solving captchas, yet they have a tendency to want to fill every field. If they fill a hidden field that a human would not normally be able to find: <strong>caught!</strong></p><p>I even wrote my own form of spam protection module, <a href="https://drupal.org/project/unique_comments?ref=adammalone.net">Unique Comments</a>. It is, however, not an attempt at preventing Zopharin (or whatever drug it is) being advertised. Rather, it is an attempt to ensure that as the site matures, users take a requisite amount of care with the comments they add.</p><p>Similar to the concept spawned on the xkcd IRC and explained in the <a href="https://blog.xkcd.com/2008/01/14/robot9000-and-xkcd-signal-attacking-noise-in-chat/?ref=adammalone.net">subsequent blog post</a>, the module ensures that no two comments (either site-wide or on a node by node basis) are the same. As time goes on, the number of possible comments diminishes until users have to be constructive and use more than one word!</p><p>Unique Comments therefore probably falls closer under a quality control banner than actual spam protection. This puts it in the same category as half of <a href="https://drupal.org/project/mollom?ref=adammalone.net">mollom</a>, the other half falling under actual spam prevention. It's a good service on the whole in my experience, but has the downside of sometimes being <em>too efficient</em> in judging comments as spam.</p><p><strong>Further spam prevention</strong></p><p>An ever-increasing number of spam comments is <a href="http://www.readwriteweb.com/archives/the_state_of_web_spam_human-posted_spam_is_on_the.php?ref=adammalone.net">written by humans</a>. This makes hidden fields and captchas ineffective, since humans <strong>are</strong> able to fill in the numbers or copy the letters.
This then gives rise to <a href="http://joecorall.com/drupal-stop-spam-with-mollom-and-block-ip?ref=adammalone.net">more intensive methods</a> of blocking and banning users who contribute spam. By blocking a number of users in <a href="http://www.simplehelp.net/2009/04/06/how-to-block-an-ip-address-in-iptables-in-linux/?ref=adammalone.net">iptables</a> (around 8 IP addresses) I've stopped the majority of spam arriving at my site. Less savvy site administrators can do this using Drupal's inbuilt IP blocking mechanism, adding the offending IP addresses to the block list (admin/config/people/ip-blocking).</p><p>All in all, spam is an unfortunate thing we appear to have to deal with. Here's hoping <a href="http://www.spamhaus.org/news/article/685/spam-botnets-the-fall-of-grum-and-the-rise-of-festi?ref=adammalone.net">more botnets fall</a>, leaving more webspace for us legitimate users.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ DD isn&#x27;t necessarily bad ]]></title>
        <description><![CDATA[ For those responsible amongst you who do not have a non-drinker in their midst the perennial question before going out of an evening is, or should be, who is the designated driver. Only yesterday the NSW government released the &#39;Plan B&#39; anti-drink driving advertisements which are focused entirely ]]></description>
        <link>https://www.adammalone.net/dd-isnt-necessarily-bad/</link>
        <guid isPermaLink="false">5f3380e421b8f9692ae93304</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 19 Aug 2012 08:17:00 +0000</pubDate>
        <media:content url="https://www.adammalone.net/content/images/2020/08/drinks.png" medium="image"/>
        <content:encoded><![CDATA[ <p>For those responsible amongst you who do not have a non-drinker in their midst, the perennial question before going out of an evening is, or should be, who is the designated driver. Only yesterday the NSW government released the <a href="https://roadsafety.transport.nsw.gov.au/campaigns/plan-b/index.html?ref=adammalone.net">'Plan B' anti-drink driving advertisements</a>, which are focused entirely on ensuring people who engage in drinking do not then engage in wrapping their car around a pole.</p><p>It was my turn to DD recently and I thought it would be a fairly early night, so I wasn't too bothered. We opted to go to a fairly popular <a href="http://www.muddlebar.com/?ref=adammalone.net">cocktail bar</a> at around 10 and then home before midnight. At least, that's what I presumed.</p><p>By closing time, around 1am, we were invited out further by colleagues of a friend I was with. Thus began the saga of me entering a club, Mooseheads, entirely sober.</p><p>It was a mess.</p><p>That being said, it's an experience I'd recommend people try out, if only for the sheer morbid curiosity.</p><p>Being sober isn't really what clubs were invented for. They were created for the purpose of getting people drunk, then presumably laid (depending on the prowess of either party and the level of inebriation). To the person under the influence, clubs are wild, exciting, dance-inducing blurs of intensity. Unfortunately for me, on that night, it was a sticky-floored, sub-thumping, dazzle-eyed, kind of stinky area of massive over-stimulation.</p><p>I'm not talking about first years getting over-stimulated either; that was me. There was so much going on I could hardly keep track of it. From the guy who took his shirt off and bared his enormous belly to the world, to the guy who used his bottle as a phallic object spraying beer on people, to the guy who walked up to the girls next to where I was sitting to chat them up.
I could hardly take it all in; and I loved it.</p><p>I have a greater respect than ever now for the door staff. Where there are nine instances of guys play fighting, there is one actual squaring up. Maybe they can smell it, but I was impressed at the speed and accuracy with which they were able to differentiate.</p><p>The point of my story is to try it out: deprive the clubs of your payment for just one night and experience what it's like to club sober.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Name my blog ]]></title>
        <description><![CDATA[ It has just come to my attention that my blog/site doesn&#39;t really have a name.
Sure, I can rearrange the front page over and over but without a catchy title;
one with pizazz and zing I&#39;ll be limited to Z-league blogging for eternity.

So I ]]></description>
        <link>https://www.adammalone.net/name-my-blog/</link>
        <guid isPermaLink="false">5f33813821b8f9692ae9331c</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 12 Aug 2012 06:52:00 +0000</pubDate>
        <media:content url="" medium="image"/>
        <content:encoded><![CDATA[ <p>It has just come to my attention that my blog/site doesn't really have a name. Sure, I can rearrange the front page over and over, but without a catchy title, one with pizazz and zing, I'll be limited to Z-league blogging for eternity.</p><p>So I put it to you, humble anonymous readers: rename this website into something other than my name and allow me to flourish into at least B-league.</p><p>A few caveats: I'll be ignoring suggestions with titles including but not limited to:</p><ul><li>Anything that will put me in prison/legal hot water</li><li>Names advocating actions outside of/in contravention of the Geneva convention</li><li>Things you wouldn't say to your mother</li></ul><p>Play nice and rename me!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ My first attempt at art ]]></title>
        <description><![CDATA[ Contrary to popular belief I am quite a creative person.

However, this does not manifest itself in the same way as it does for 
conventional creativity. What I assume most people think, as this is what I
previously thought, was that to be creative you had to be fantastic at ]]></description>
        <link>https://www.adammalone.net/my-first-attempt-art/</link>
        <guid isPermaLink="false">5f33808d21b8f9692ae932f3</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Fri, 10 Aug 2012 02:57:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/not_my_art.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Contrary to popular belief I am quite a creative person.</p><p>However, this does not manifest itself in the same way as it does for <em>conventional</em> creativity. What I assume most people think, as this is what I previously thought, is that to be creative you have to be fantastic at art, writing, music or dramatics. Those who painted, sculpted and wrought masterpieces from paint, pen, clay and iron were the true creatives. Those musicians who trilled, acciaccatura'd and crescendo'd their performances with great flourishes were the true creatives. The dramatists who created imagery through the characters they portrayed and conveyed what could be taken as true and genuine emotion were the true creatives.</p><p>To be fair to myself, I have dabbled in the aforementioned arts somewhat. I played a musical instrument throughout the majority of my pre-university life, performed in a few of the projects the high school's drama club wrote (although I wasn't a drama student) and have in the course of my lifetime written for a number of blogs. We'll leave the art category out of this as I peaked at stick figures.</p><p>All this being said, I never got very far with any of those creative pursuits. Sure, I managed to pass level 1, but when the max level cap for acting is level 99 I wasn't even close. On a scale of 0 to <a href="https://en.wikipedia.org/wiki/Gary_Oldman?ref=adammalone.net">Gary Oldman</a> I fall towards the lower end of the spectrum.</p><p>The more I've thought about what it takes to be creative, the more I consider the work I do to be a creative pursuit. Writing code and styling sites, making decisions on workflow and using Drupal to carve out successful websites is to some extent a creative process.
Draw inspiration from here, imagine up a few additional details here, gain advice from dreams over there and code up a storm from around there.</p><p>All that digression aside and coming back to the point of this post, I was recently directed to <a href="http://pokemonbattleroyale.tumblr.com/?ref=adammalone.net">this tumblr site</a>. To some it may just seem like a few pictures of cartoon <em>things</em> but to me it was 151 artists' impressions of my childhood. I <strong>breathed</strong> Pokemon for a number of years, buying games, trading cards, having battles, watching the anime and generally talking about it with a number of friends at every opportunity.</p><ul><li>I remember when my Grandfather bought me a booster set of cards and inside, the fabled Charizard shiny.</li><li>I remember when I took that same shiny to school and someone ran off with it, only to be chased down by my friend group and me as we demanded they give it back.</li><li>I remember playing <a href="https://au.ign.com/games/pokemon-snap/n64-2335?ref=adammalone.net">Pokemon Snap</a> on my Pikachu edition N64.</li><li>I remember defeating Pokemon Red, Blue &amp; Yellow.</li></ul><p>The list of Pokemon related memories I have is almost beyond writing, and those memories don't even require strain to remember. In all likelihood I could probably still recite, off the top of my head, at least 130 of the first 151 Pokemon.</p><p>So when I see some of this art, with knowledge of my own artistic limitations, and now that I am no longer an impoverished student, I feel it's acceptable to treat myself.
Although I've been ridiculed by a couple of close friends for desiring some of these, I felt vindicated when revealing my ambitions to other friends who had the same love as me when I was younger.</p><p>Although the exhibition has ended and the prints are no longer for sale, I've since <a href="https://twitter.com/Small_Talk/status/232328629584003072?ref=adammalone.net">been in contact with one of the organisers</a> and followed her advice. This led me to contact <a href="http://janemai.co/?ref=adammalone.net">Jane Mai</a> and <a href="http://erikbkrenz.com/home.html?ref=adammalone.net">Erik Krenz</a>, who created a couple of the pieces I liked the most. Wish me luck for my first foray into the art world; if all goes to plan I will have things to hang in my study that make me smile when I look up during my own creative escapades.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Wenatex: How I was invited to a free dinner ]]></title>
        <description><![CDATA[ UPDATE: My first hand experience of a Wenatex dinner/event.

I was at a friend&#39;s house a couple of nights ago when we were shown a letter that came through their door. It was an invitation to an obligation free dinner and seminar hosted by Wenatex for a ]]></description>
        <link>https://www.adammalone.net/wenatex-how-i-was-invited-free-dinner/</link>
        <guid isPermaLink="false">5f33801821b8f9692ae932db</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 05 Aug 2012 06:40:00 +0000</pubDate>
        <media:content url="https://www.adammalone.net/content/images/2020/08/wenatex_logo.png" medium="image"/>
        <content:encoded><![CDATA[ <p><strong>UPDATE:</strong> <a href="https://www.adammalone.net/post/wenatex-how-i-went-free-dinner">My first hand experience of a Wenatex dinner/event.</a></p><p>I was at a friend's house a couple of nights ago when we were shown a letter that came through their door. It was an invitation to an obligation-free dinner and seminar hosted by Wenatex, a couple of weeks from today. The invitation also included 2 x $50 gift certificates to be redeemed on the night.</p><p>After reading through the extensive documentation that accompanied the letter I uncovered a few key pieces of knowledge.</p><ol><li>The lack of information about exactly who Wenatex were and what they do was so obvious they may as well have stamped it on the envelope.</li><li>Basic details that seemed to check out were that the dinner was at the <a href="http://www.hotelkurrajong.com.au/HomePage.aspx?ref=adammalone.net">Hotel Kurrajong</a> in Canberra and that it was most definitely free and without obligation.</li><li>Something vague about sleep and sleep products.</li></ol><p>As anybody who has heard of a <a href="http://www.419eater.com/html/419faq.htm?ref=adammalone.net">419 scam</a>, or indeed has ever received <em>any</em> form of marketing, will know, this instantly screamed out to me: <strong>TOO GOOD TO BE TRUE.</strong></p><p>Thinking that I'm extremely tech savvy (<a href="https://www.adammalone.net/">Come on, I own my own domain!</a>) and that I am a child of the Internet generation, I decided to delve into the murky world of information gathering with a quick Google search.</p><p>After finding <a href="http://www.wenatex.com.au/?ref=adammalone.net" rel="nofollow">their amateurish website</a> and taking a good seven minutes clicking links, I found next to no information about their products and absolutely no information about their prices.
Personally, if I wanted to sell my wares, which are highly sought after, I would at least advertise them slightly better on my site than Wenatex does.</p><p>The website was almost no better at giving me information than the aforementioned documentation accompanying the letter, so I decided to look at third-party blogs and forums to see what others in my position had to say.</p><p>After reading posts from <a href="http://www.aussiestockforums.com/forums/showthread.php?t=10433&ref=adammalone.net">here</a>, <a href="http://www.consumer.org.nz/news/view/wenatex-healthy-sleep-presentations?ref=adammalone.net">here</a>, <a href="https://forums.whirlpool.net.au/archive/1915638?ref=adammalone.net">here</a> and <a href="http://www.consumer.org.nz/news/view/wenatex-healthy-sleep-presentations?ref=adammalone.net">here</a>, the rest of the picture was fleshed out and I gained a fuller understanding of who Wenatex are, what they do and what these seminars are.</p><p>By ensnaring people with a free meal at a nice establishment, Wenatex employees <em>allegedly</em> then enter into an exercise in hard selling. With beds and bed products costing thousands of dollars, attendees are <em>allegedly</em> pressured into buying the products with tactics such as "We are offering this &lt;reduced but still damned expensive&gt; price for one night only!" as well as a lot of implausible information about how sleeping on a bed of dried herbs or using a goat milk mattress benefits sleep. I've read they also show magnified images of bed bugs, which I'm led to believe Wenatex beds are immune to.</p><p>The psychological effect of these tactics leads people to believe that Wenatex beds are better for their general health and wellbeing.
The fact that the prices are reduced for one night only further incites the sale.</p><p>Since the meal is free and they state multiple times that it's all no obligation, I feel there's no harm in taking advantage of their generosity and acquiring a free meal in the process. I'd like to believe I'm resilient enough to resist the hard sell; even if I'm not, I doubt my bank account would acquiesce quite as easily.</p><p>My advice to anyone else who has received an invitation to a Wenatex event offering free meals, gifts and the like is to go to dinner but treat everything they say with a healthy degree of skepticism. Don't be taken in by any marketing ploy and trust your instincts when they unreservedly tell you herbs belong on food, not in blankets.</p><p>In summary:</p><ul><li>Acquire free meal</li><li>Acquire free gift</li><li>Ignore hard sell</li><li>Leave happy</li></ul> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Pick 3 ]]></title>
        <description><![CDATA[ I may not be the busiest person in the world, indeed I&#39;m often looking to be
busier; things to keep me occupied are good!

I&#39;ll have a few side projects on the go at any one time which makes
accomplishing other things ]]></description>
        <link>https://www.adammalone.net/pick-3/</link>
        <guid isPermaLink="false">5f3377d821b8f9692ae932bd</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 29 Jul 2012 11:16:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/venn_diagrams.png" medium="image"/>
        <content:encoded><![CDATA[ <p>I may not be the busiest person in the world; indeed, I'm often looking to be busier. Things to keep me occupied are good!</p><p>I'll have a few side projects on the go at any one time, which makes accomplishing other things difficult. Each of them competes for my time, and time given to one cannot, alas, fulfil another. In a way, these are the four base categories I can fit any part of my life into if asked at any point.</p><p>For example, this blog post would be considered a side project, as it's something I like to do but is non-essential to daily living. It has the benefit of allowing me to vent, as well as giving me some writing experience and <a href="http://www.seomoz.org/blog/google-fresh-factor?ref=adammalone.net">improving the SEO rankings of my website</a>, hence my <em>brand</em>.</p><p>Other side projects would include <a href="https://drupal.org/user/1295980?ref=adammalone.net">developing Drupal modules</a> and general personal research and development. My justification is that I'll need to stay at least towards the head of the curve so I can be ready for the next best thing.</p><p>I also have to give an amount of attention to <a href="https://en.wikipedia.org/wiki/Lord_Voldemort?ref=adammalone.net">she who must not be named</a> and to work.</p><p>Once all of these are taken into account, I am able to put time into the final category: sleep. This doesn't mean that I will receive no sleep; it does, however, mean that my sleep time will not be the full amount I need to <em>feel refreshed</em>.</p><p>It has become quite obvious that out of these four categories, I really, truly only have time to accomplish three. This is unfortunate, as more hours in the day or some kind of time <a href="http://harrypotter.wikia.com/wiki/Time-Turner?ref=adammalone.net">dilation device</a> would really help. However, these are currently impossible, so I guess I'll sleep when I'm dead.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Hotel no-fi ]]></title>
        <description><![CDATA[ Since I&#39;m in Sydney for the weekend, it was sort of necessary to find a place to
stay. I&#39;ve slept in the car before and really that&#39;s something I&#39;d like to
reserve for special occasions.

Since I like to think of myself ]]></description>
        <link>https://www.adammalone.net/hotel-no-fi/</link>
        <guid isPermaLink="false">5f33777621b8f9692ae932a6</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 22 Jul 2012 02:21:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/wifi-logo.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Since I'm in Sydney for the weekend, it was sort of necessary to find a place to stay. I've slept in the car before, and really that's something I'd like to reserve for special occasions.</p><p>Since I like to think of myself as a man of the internet, I want a good connection for every waking moment. It makes me happy to see places embrace the always-online culture and offer free wifi to their clients. Companies like <a href="http://www.queenslandrail.com.au/railservices/city/pages/wifi.aspx?ref=adammalone.net">QLD Rail</a>, <a href="https://en.m.wikipedia.org/wiki/Gogo_Inflight_Internet?ref=adammalone.net#section_2">Delta</a> and even the <a href="http://www.guardian.co.uk/uk/2012/jun/01/london-tube-stations-wi-fi?mobile-redirect=false&ref=adammalone.net">London Underground</a> have rolled the service out, or soon will.</p><p>What annoys me is how narrow-minded a lot of hotel chains still are when it comes to providing online access to guests. The mentality is that online access is a luxury, so guests must pay top dollar for low-quality service. I could understand a little more if this were the early 90s, when it would have been expensive for the hotel to be online and the relevant technologies were sufficiently new to demand a premium. However, in 2012, is it too much to ask for free, high-speed internet whilst I stay in hotels?</p><p>Having been a business customer a few times, and hopefully more in future, I know internet access is necessary for pre-meeting preparation, general worldly awareness and even just staying in contact with people. 
As a pleasure user, heck, I just want to read some blogs and browse Reddit!</p><p>Since the advent of mobile phone wifi modems, it's all too easy to simply set my phone to broadcast and connect a laptop and tablet to browse late into the night, as I did last night!</p><p>I was able to stream video from Vimeo, read a few articles using Pulse/Google Reader, post <a href="/post/ag-after-google">last night's blog article</a> and, yes, even fit in one round of Draw Something.</p><p>With all this in mind, you might ask why I'm complaining about hotel wifi when it seems I am adequately prepared to deal with no access by simply tethering.</p><p>There are two main reasons:</p><ul><li>Sometimes you just don't get a good signal.</li><li>Stop being technology dinosaurs, get with the times and realise that offering it could make the difference between guests staying at your hotel or not, and increasingly so into the future. If you do not adapt, you will fail.</li></ul> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ AG: After Google ]]></title>
        <description><![CDATA[ I was having a conversation with a friend the other day about what would happen
when/if Google becomes obsolete.

At present it seems almost impossible to imagine the end of the search giant. It
being present in my life to the extent that I [https://www.google.com.au/ ]]></description>
        <link>https://www.adammalone.net/ag-after-google/</link>
        <guid isPermaLink="false">5f3369cbd3ed9b6facf506c0</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sat, 21 Jul 2012 14:00:00 +0000</pubDate>
        <media:content url="https://images.unsplash.com/photo-1529612700005-e35377bf1415?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;MnwxMTc3M3wwfDF8c2VhcmNofDF8fGdvb2dsZXxlbnwwfHx8fDE2NzEzMjA5OTc&amp;ixlib&#x3D;rb-4.0.3&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>
        <content:encoded><![CDATA[ <p>I was having a conversation with a friend the other day about what would happen when/if Google becomes obsolete.</p><p>At present it seems almost impossible to imagine the end of the search giant, present in my life to the extent that <a href="https://www.google.com.au/?ref=adammalone.net">I</a> <a href="https://mail.google.com/?ref=adammalone.net">use</a> <a href="https://maps.google.com.au/?ref=adammalone.net">it</a> <a href="https://plus.google.com/?ref=adammalone.net">every</a> <a href="https://play.google.com/?ref=adammalone.net">day</a>, to say the least. My workplace (and my own domains) run on Google Apps, and after the pain of running my own mailserver, having Google take over lifted the load considerably. On Monday I'll receive my new <a href="https://www.google.com/nexus/?ref=adammalone.net">Nexus 7</a>, which will accompany the Galaxy S2 and Asus TF300 I already have, all of which run Android and utilise the Play Store, provided by Google.</p><p>The thing a lot of people seem to forget is that Google is, at its heart, an advertising company, not unlike Facebook. The reason this is omitted from people's perceptions is what I like to refer to as the 'iPod Effect'.</p><p>When I was younger, in high school, and worked retail on Saturday mornings at a local consumer electronics shop, I would get the same question over and over.</p><blockquote>Would you recommend I get an mp3 player or an iPod?</blockquote><p>The way Apple's marketing team, or Steve Jobs himself, or a combination of the two had advertised the iPod made it seem like an alternative to the mp3 player, whereas in fact they are one and the same. Or, more accurately put, an iPod is just a subset of the generic mp3 player category.</p><p>This same effect is still prevalent, and only a couple of days ago my, quite obviously Asus, tablet was repeatedly referred to as 'iPad'. 
Almost as if one can purchase either a tablet OR an iPad.</p><p>This brings me, in a wide circle, back to why Google enjoys its own 'iPod Effect'. Perhaps it is to do with their <a href="https://en.wikipedia.org/wiki/Don't_be_evil?ref=adammalone.net">motto</a>, or just that they are perceived as a generally philanthropic company, providing all these amazing free products for users.</p><p>Either way, the perception that they are not out to do harm has, in part, made them highly successful and has gained them a <strong>HUGE</strong> number of service users who, in turn, have developed the kind of reliance on them I admit to having, seemingly making them <a href="https://en.wikipedia.org/wiki/Too_big_to_fail?ref=adammalone.net">too essential to fail</a>!</p><p>However, as is often the case, things must run their course. Like the tide coming in and then retreating, I find it hard to believe any company will last ad infinitum. I made it clear to my friend that I doubted any 'ungooglication' any time soon; however, when Google was inevitably replaced, as it had in the beginning replaced others (Yahoo, Alta Vista, Ask Jeeves), the replacement would be an improvement, just as Google itself was when it took over.</p><p>In summary, After Google (whenever that is) won't be the end of days, even if that day were suddenly tomorrow. Like the slow yet constant <a href="https://www.w3schools.com/browsers/browsers_stats.asp?ref=adammalone.net">take-up of alternatives to IE</a>, it'll be a trickle rather than a torrent, and more likely than not we'll not even realise it's happening until we're all using the AG Alternative.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ My initiation to clean URLs code ]]></title>
        <description><![CDATA[ The setup

I have been working on a firewalled (remember this for later) development server
over the last couple of weeks at the AMA [http://ama.com.au/]. With entries into
host files, .htaccess files and firewall rules to prohibit those not developing
the updated site from accessing it we ]]></description>
        <link>https://www.adammalone.net/my-initiation-clean-urls-code/</link>
        <guid isPermaLink="false">5f3362164017e06a02f91687</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sat, 14 Jul 2012 00:27:00 +0000</pubDate>
        <media:content url="" medium="image"/>
        <content:encoded><![CDATA[ <p><strong>The setup</strong></p><p>I have been working on a firewalled (remember this for later) development server over the last couple of weeks at the <a href="http://ama.com.au/?ref=adammalone.net">AMA</a>. With entries in hosts files, .htaccess files and firewall rules to prohibit those not developing the updated site from accessing it, we considered it pretty locked down.</p><p>Development was going well, with custom and contrib modules almost finished and the theme pulling together well. Drupal requires a few extra configuration options be set, such as user permissions, metatags, labels on fields and things like sitename/slogan. One of the final things on the checklist was to turn <a href="https://drupal.org/getting-started/clean-urls?ref=adammalone.net">clean URLs</a> on. After navigating to the clean URLs page I ran the check, and it failed:</p><figure class="kg-card kg-image-card"><img src="/sites/adammalone/files/styles/large/public/WKNlp.png" class="kg-image" alt></figure><p><strong>Detective Work</strong></p><p>OK, this has happened a few times before; let's just revisit the handbook and go over the steps.</p><ul><li>Ensure mod_rewrite is on. The server is Debian, so a quick apachectl -M showed rewrite_module (shared); that was fine.</li><li>Ensure the httpd.conf file allows the Drupal .htaccess file to override it:</li></ul><pre><code class="language-apacheconf">&lt;Directory /home/amalone/web/ama/&gt;
AllowOverride All
Order allow,deny
allow from all
&lt;/Directory&gt;</code></pre><ul><li>Looks good to me.</li><li>Check the .htaccess is even being read by putting some junk text in there and seeing if it breaks the site. Error 500: check that off the list.</li></ul><p>At this point I decided to give up for a while and assume some weird magic would allow me to enable clean URLs at some point in future, since I was at a loss.</p><p>Eventually I got back round to it and did some more investigation. Looking at the code in the <a href="https://api.drupal.org/api/drupal/modules!system!system.admin.inc/7?ref=adammalone.net">system module (system.admin.inc)</a>, I could see that when the user wants to check whether their site is ready for clean URLs, Drupal does a <a href="https://api.drupal.org/api/drupal/includes!common.inc/function/drupal_http_request/7?ref=adammalone.net">drupal_http_request</a> to itself at http://example.com/admin/config/search/clean-urls/check, which returns HTTP code 200 (OK) if clean URLs can be enabled and fails if they cannot.</p><p>Thinking on my feet, I immediately requested that URL in my browser and received the following JSON output:</p><pre><code class="language-json">{"status":true}</code></pre><p>So where is the issue?</p><p><strong>The Solution</strong></p><p>Remember at the very top of this post when I said it was a firewalled server due to it being under development? 
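</p><p>As an aside, the self-test is easy to reason about outside of Drupal: it is just an HTTP GET of the check callback that must come back as a 200 carrying {"status":true}. Here's a minimal Python sketch of that logic (the helper names are mine, not Drupal's):</p>

```python
import json

# Sketch of Drupal 7's clean-URL self-test logic: build the check
# callback address, then interpret the response the way
# system.admin.inc does (HTTP 200 with a {"status": true} body).
# Helper names are illustrative, not Drupal APIs.

CHECK_PATH = "/admin/config/search/clean-urls/check"

def check_url(base_url):
    """Return the clean-URL check callback address for a site."""
    return base_url.rstrip("/") + CHECK_PATH

def clean_urls_supported(status_code, body):
    """True only for a 200 response whose JSON body says status is true."""
    if status_code != 200:
        return False
    try:
        return json.loads(body).get("status") is True
    except ValueError:
        return False
```

<p>Requesting that address once from on the server itself and once from outside is a quick way to spot a reachability mismatch. 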
Turns out the server could not access itself from outside: a relative menu callback would have worked, but the absolute request, which involved Drupal calling itself from outside the server, was blocked by the firewall.</p><p>Obviously I could have altered the core code or opened up the firewall so clean URLs worked, but that's not advisable for a <a href="https://drupal.org/best-practices/do-not-hack-core?ref=adammalone.net">number of reasons</a>.</p><p>I decided the best way to get clean URLs working, without altering anything on the server, was to change the variable directly in the database. (The value 'i:1;' is simply PHP's serialized form of the integer 1, which is how Drupal 7 stores variables.)</p><pre><code class="language-sql">update variable set value = 'i:1;' where name = 'clean_url';</code></pre><p>URLs were clean, the site wasn't broken, and we were one step closer to site release.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ How I almost got my name .com ]]></title>
        <description><![CDATA[ I was curious a while ago as to who owned the .com of my name as the domain was
parked and I wanted to see if I could make an offer on it seeing as whoever
owned it wasn&#39;t using it. So I did a whois of http: ]]></description>
        <link>https://www.adammalone.net/how-i-almost-got-my-name-com/</link>
        <guid isPermaLink="false">5f3361334017e06a02f9166e</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Thu, 05 Jul 2012 01:20:00 +0000</pubDate>
        <media:content url="" medium="image"/>
        <content:encoded><![CDATA[ <p>I was curious a while ago as to who owned the .com of my name, as the domain was parked, and I wanted to see if I could make an offer on it, seeing as whoever owned it wasn't using it. So I did a whois on <a href="http://adammalone.com/?ref=adammalone.net" rel="nofollow">http://adammalone.com</a> and saw who it was registered with, as well as the administrative contact. Feeling all proper, I decided to go through the correct channels and emailed the given address with what I'd like to call an <em>email of investigation</em>. Within this email I said:</p><blockquote>Good Morning,<br><br>I have noticed through a whois query that you are the administrative contact for the adammalone.com domain. I've also noticed that the domain is not currently being used. I am just emailing to query the possibility of negotiating a domain transfer at some point in the future.<br><br>Many Thanks,<br>Adam Malone</blockquote><p>It should be noted that the email also contained my Australian telephone numbers and my .net email address as a standard footer.</p><p>I received a prompt reply asking whether I'd like to point the domain at <a href="http://malonelaw.com/?ref=adammalone.net" rel="nofollow">http://malonelaw.com</a> or some other site. What had occurred instantly became obvious to me as I remembered an incident from a few years ago, when I first got onto the internet and decided to google my own name. It revealed that somewhere in the world was a lawyer namesake. Being a mischievous scamp, I decided to email the guy and pretty much say "We have the same name". To which he responded with an email along the lines of "You must be a pretty cool guy then."</p><p>Clearly he owned the .com address, and the administrative contact thought I was him (as we share the same name and thus by definition are pretty cool guys). 
After thinking for all of 13 seconds about whether I should continue the charade and have him point it at the IP of <em>my</em> server, I considered it a bad move to attempt to take, by subterfuge, a lawyer's property.</p><p>I let the contact know of the small error, for which he seemed thankful, and went on my way. A couple of days later, the .com domain was pointing towards the law site.</p><p>Thus ends an uneventful story in which I could have had the pleasure of my name's .com but didn't, because nobody really wants to get into fisticuffs with a lawyer, <a href="http://www.popehat.com/tag/oatmeal-v-funnyjunk/?ref=adammalone.net">wherever the balance of truth and law lies</a>.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ As usual ]]></title>
        <description><![CDATA[ The first post is always trashy, nobody likes it and you instantly wish you
could just skip the whole &#39;first post&#39; thing and just jump straight in as if
you&#39;d never left off.
I blogged for a while on blogger, until I decided I had lost ]]></description>
        <link>https://www.adammalone.net/usual/</link>
        <guid isPermaLink="false">5f33604c4017e06a02f9165b</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Fri, 25 May 2012 01:01:00 +0000</pubDate>
        <media:content url="" medium="image"/>
        <content:encoded><![CDATA[ <p>The first post is always trashy: nobody likes it, and you instantly wish you could just skip the whole 'first post' thing and jump straight in as if you'd never left off.<br>I blogged for a while on Blogger, until I decided I had lost all ideas for making up stories about myself that three other people in the world would read, so I stopped. Right now, this site acts as a place for me to actively develop using <a href="https://drupal.org/?ref=adammalone.net">Drupal</a>, where I've been trying to innovate and learn. This site is also a platform on which I can promote myself and my work and act, in the future, as a showcase for the things I have accomplished.<br>I can neither confirm nor deny the long-term future of this blog, but I'd rather not kill it before I've had a chance to decide whether I want to blog again.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Working with web services ]]></title>
        <description><![CDATA[ Any time I work with data stored in a location that&#39;s not on the same server
that the Drupal website is on, there&#39;s a high chance web services
[https://en.wikipedia.org/wiki/Web_service] will be involved.

Where web services fail
Any task that could ]]></description>
        <link>https://www.adammalone.net/working-web-services/</link>
        <guid isPermaLink="false">5f3394d921b8f9692ae93776</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 29 Apr 2012 09:15:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/web_services.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Any time I work with data stored somewhere other than the server the Drupal website is on, there's a high chance <a href="https://en.wikipedia.org/wiki/Web_service?ref=adammalone.net">web services</a> will be involved.</p><h3 id="where-web-services-fail"><strong>Where web services fail</strong></h3><p>Any task that could affect the user's experience on the website due to latency from a web service call is undesirable. With users or content stored externally, and web services relied upon to store data or verify user credentials, there are a few key hooks at which the user could notice slowdown:</p><ul><li>User login - <a href="https://api.drupal.org/api/drupal/modules!user!user.api.php/function/hook_user_login/7?ref=adammalone.net">hook_user_login</a></li><li>User save - <a href="https://api.drupal.org/api/drupal/modules!user!user.api.php/function/hook_user_update/7?ref=adammalone.net">hook_user_update</a>/<a href="https://api.drupal.org/api/drupal/modules!user!user.api.php/function/hook_user_insert/7?ref=adammalone.net">hook_user_insert</a>/<a href="https://api.drupal.org/api/drupal/modules!user!user.api.php/function/hook_user_presave/7?ref=adammalone.net">hook_user_presave</a></li><li>Comment creation - <a href="https://api.drupal.org/api/drupal/modules!comment!comment.api.php/function/hook_comment_insert/7?ref=adammalone.net">hook_comment_insert</a></li><li>Node creation - <a href="https://api.drupal.org/api/drupal/modules!node!node.api.php/function/hook_node_insert/7?ref=adammalone.net">hook_node_insert</a></li></ul><p>Take the example of updating an external database with node details every time a node is inserted, assuming a slow connection of perhaps 10 seconds per node. Clicking <strong>save</strong> on the node page would leave the author with 10 seconds of painful waiting. 
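</p><p>The standard fix is to defer the slow call rather than make the author wait. A minimal Python sketch of the idea (the names are illustrative only, not Drupal APIs): record the work at save time, then have a cron-style runner drain the backlog within a time budget:</p>

```python
import time
from collections import deque

# Deferred-work sketch (illustrative, not Drupal APIs): slow web-service
# calls are queued at save time and processed later under a time budget.
pending = deque()

def on_node_insert(node_id):
    # Called when a node is saved: just record the work and return fast,
    # so the author never waits on the web service.
    pending.append(node_id)

def cron_run(process_item, time_limit=240.0):
    # Drain the backlog, stopping before the time budget is exceeded;
    # anything left over simply waits for the next run instead of
    # being lost to a timed-out cron.
    deadline = time.monotonic() + time_limit
    done = []
    while pending and time.monotonic() < deadline:
        done.append(process_item(pending.popleft()))
    return done
```

<p>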
An alternative to running the update on each node insert would, of course, be to make the database transactions occur on <a href="https://drupal.org/cron?ref=adammalone.net">cron</a>.</p><p>The <a href="https://api.drupal.org/api/drupal/includes%21common.inc/function/drupal_cron_run/7?ref=adammalone.net">default cron time limit</a> is 240 seconds (4 minutes), which in our hypothetical situation allows 24 nodes to be updated externally (provided no other cron tasks need to run). What happens if we have 30 nodes that need to be updated?</p><p>Cron will time out and won't run fully.</p><p>Now take the example of user login, used either to authenticate credentials or to update user profiles. Having to query and wait for a slow web service call can diminish the user experience. Even though any services provided are supposed to be <em>always on</em>, there is a non-zero probability that they will eventually be unavailable, however good the SLA.</p><p>Do users simply get denied authentication, or do details not get updated?</p><h3 id="the-queue-api"><strong>The Queue API</strong></h3><p>A much underrated and less well-known Drupal API is the Queue API.</p><blockquote>The queue system allows placing items in a queue and processing them later. The system tries to ensure that only one consumer can process an item.</blockquote><p>By putting all of the tasks we need to process into a queue and working through them one by one, we can ensure they all get taken care of and cron doesn't time out. This means that any hook_cron implementations we expect to take a <strong>really</strong> long time can be put into a queue and will still be processed.</p><h3 id="thinking-outside-drupal"><strong>Thinking outside Drupal</strong></h3><p>Although Drupal is, of course, the answer to all life's problems, sometimes it just isn't.</p><p>It can be tempting to think of Drupal as the hammer to every single nail-like problem. 
Sometimes, for cases with huge computational requirements, it's best to keep Drupal completely out of the picture. Take our example of user details being managed outside of Drupal and updated every time a user logs in: a better method may be to write a script that transfers the data from the target server to the Drupal server, after which it can be imported into a database table and processed by Drupal. Alternatively, the script may update the Drupal database directly.</p><p>Not only does this cut down any latency caused by data transfer, but if the web service becomes inaccessible, the data is still available locally. When service resumes, the updates will resume too, and users may continue logging in regardless of external downtime.</p><p>These are but two ways of <em>thinking outside Drupal</em>. There are numerous other novel ways to work with (or around) Drupal and web services. It's up to the site administrator to find the one that best suits the problem at hand.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ On outstaying welcomes ]]></title>
        <description><![CDATA[ A number of months ago I was party to a conversation about entertaining a
houseguest for up to a week on their return from foreign travels. Being the
charitable individual I am, I volunteered to play host for &quot;around three days&quot;.

Fast forward four weeks and I started ]]></description>
        <link>https://www.adammalone.net/outstaying-welcomes/</link>
        <guid isPermaLink="false">5f33949221b8f9692ae93762</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sun, 22 Apr 2012 10:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/couch_sleeper.png" medium="image"/>
        <content:encoded><![CDATA[ <p>A number of months ago I was party to a conversation about entertaining a houseguest for up to a week on their return from foreign travels. Being the charitable individual I am, I volunteered to play host for "<em>around three days</em>".</p><p>Fast forward four weeks, and I started to wonder at what point a welcome is overstayed.</p><h3 id="open-discussion"><strong>Open Discussion</strong></h3><p>Due to the nature of the friendship, the overstayed welcome was discussed, and the unavoidable predicament of having to reside with me whilst searching for their own place made it easier to extend my welcome than to rescind it. A welcome, we found, varies with circumstance and is not a set value.</p><h3 id="trading-welcome"><strong>Trading Welcome</strong></h3><p>Perhaps welcomes are comparable to karma and could be a tradeable commodity. With <a href="https://en.wikipedia.org/wiki/%E2%B1%B2?ref=adammalone.net">Ⱳ</a> being my new international symbol for welcome, I feel there could be an excellent system, albeit honour-bound, for people to track their welcome (Ⱳ) and ensure they do not overstay it. Welcome (Ⱳ) may be accrued by enduring someone overstaying their welcome and lost by overstaying yourself. I might even allow the ability to incur welcome (Ⱳ) debt, provided it is paid off promptly.</p><p>With this in mind, I can only imagine how much of a welcome (Ⱳ) millionaire I am right now.</p><h3 id="welcome-repaid"><strong>Welcome Repaid</strong></h3><p>Having incurred a welcome (Ⱳ) debt subsequent to moving out, my venerable houseguest has accepted his fate and offered me an indefinite stay at his current place of residence should I require it. I feel the most appropriate course of action is to incrementally spend my welcome (Ⱳ) until he and I are square: repaid, all debts settled.</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Drupal Sprint Weekend Wrap-up ]]></title>
        <description><![CDATA[ After the recent global Drupal sprint weekend
[https://groups.drupal.org/node/277768] I hosted, I feel it&#39;s a worthwhile task
to report back some of the successes and challenges faced by both myself and the
other participants.

Saturday

The first sprint day saw myself [https://drupal.org/ ]]></description>
        <link>https://www.adammalone.net/drupal-sprint-weekend-wrap-up/</link>
        <guid isPermaLink="false">5f33938b21b8f9692ae93719</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Sat, 24 Mar 2012 02:10:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/sprint_myview.png" medium="image"/>
        <content:encoded><![CDATA[ <p>After the recent global <a href="https://groups.drupal.org/node/277768?ref=adammalone.net">Drupal sprint weekend</a> I hosted, I feel it's worthwhile to report back some of the successes and challenges faced by both myself and the other participants.</p><p><strong>Saturday</strong></p><p>The first sprint day saw <a href="https://drupal.org/user/1295980?ref=adammalone.net">myself</a>, <a href="https://drupal.org/user/316560?ref=adammalone.net">petercook</a> and <a href="https://drupal.org/user/633216?ref=adammalone.net">rli</a> starting early at 9am, with <a href="https://drupal.org/user/873966?ref=adammalone.net">jrsinclair</a> and a colleague arriving a little later. We were some of the first in the world to start sprinting, as our position in the river of time is more advanced due to geographic location; in other words, 'woo timezones'.</p><p>Although organising the sprint meant I was mentoring some of the newer contributors and ensuring sufficient levels of caffeine, I managed to rattle off a patch for <a href="https://drupal.org/node/1548204?ref=adammalone.net">this issue</a> to convert user signatures into their own field. It'll require a reroll in the near future, however, after some changes to how the user entity is displayed in comments/nodes. petercook started the weekend strong with a comprehensive test of <a href="https://drupal.org/node/698236?ref=adammalone.net">this issue</a> in MySQL, PostgreSQL and SQLite before adding his own additional documentation, and rli tracked down and reported a <a href="https://drupal.org/node/1937852?ref=adammalone.net">new bug</a> that appears when translations and multilingual support are enabled.</p><p>Numbers dwindled and we officially finished around 5pm. 
I decided to pursue some further issues and left the building towards 9pm after a brief hangout with <a href="https://twitter.com/johnheaven?ref=adammalone.net">John Heaven</a> and the team at <a href="http://www.comm-press.de/?ref=adammalone.net">Comm-Press</a>, and with <a href="https://drupal.org/user/214652?ref=adammalone.net">Berdir</a>.</p><p><strong>Sunday</strong></p><p>Another day, another sprint: <a href="https://drupal.org/user/350381?ref=adammalone.net">rooby</a>, petercook and I sprinted until late again. I decided to take time off core to focus on some of my contributed modules. <a href="https://drupal.org/project/poll?ref=adammalone.net">Poll</a> needed a few changes to make it compatible with the latest Drupal 8 changes. I therefore spent the day helping out with <a href="https://drupal.org/node/5688?ref=adammalone.net">other issues</a>, triaging Poll and fixing some of its bugs. The other guys got on with their own issues and we finished late again.</p><p>Overall it was great fun to sprint with other Drupalers, and something I'd be keen to do again with more of the DrupalACT community!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ DrupalCon Sydney 2013 ]]></title>
        <description><![CDATA[ Technically, Sydney was my first DrupalCon.

Although I have attended a number of local Drupal events and camps, this was a
little different. Not least because it&#39;s the first DrupalCon in the Southern
Hemisphere, the first in APAC and of course the first in Australia, but also
because ]]></description>
        <link>https://www.adammalone.net/drupalcon-sydney-2013/</link>
        <guid isPermaLink="false">5f33926c21b8f9692ae936ec</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Tue, 06 Mar 2012 08:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/drupalcon_sydney.png" medium="image"/>
        <content:encoded><![CDATA[ <p>Technically, Sydney was my first DrupalCon.</p><figure class="kg-card kg-image-card"><img src="/sites/adammalone/files/styles/large/public/working-morning.jpg" class="kg-image" alt></figure><p>Although I have attended a number of local Drupal events and camps, this was a little different. Not least because it's the first DrupalCon in the Southern Hemisphere, the first in APAC and of course the first in Australia, but also because it had a distinctly global feel. Swathes of people from all over the world attended, shared their ideas and tweeted up a storm.</p><p>Much in the same vein as <a href="http://2012.drupaldownunder.org/?ref=adammalone.net">Drupal Downunder 2012 in Melbourne</a>, the event has left me even more buoyed up and immersed in the Drupal community. Being surrounded by so many like-minded Drupal people who share similar interests and experiences is invigorating to say the least!</p><p>So what now?</p><p>With all of this additional energy, I will be volunteering my time for the next DrupalCamp Canberra and <a href="https://plus.google.com/100956791928924901165/posts?ref=adammalone.net">DrupalACT</a>. My next public contribution will be at the <a href="https://groups.drupal.org/node/285823?ref=adammalone.net">DrupalACT March meetup</a> where I'll be presenting on <a href="https://drupal.org/documentation/build/distributions?ref=adammalone.net">Drupal distributions</a>, a topic I've been interested in for a while. The full write up of distributions, how they work and best practices will be available shortly after the 14th March, so until then: anticipation!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ GitHub ]]></title>
        <description><![CDATA[ For the past few years I&#39;ve been using bitbucket [https://bitbucket.org/] as my
git host for personal projects. What I most appreciate about it is the ability
to have unlimited private repositories. This is especially useful when doing
work that&#39;s either covered by an NDA ]]></description>
        <link>https://www.adammalone.net/github/</link>
        <guid isPermaLink="false">5f33922321b8f9692ae936d9</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Mon, 27 Feb 2012 21:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/git.png" medium="image"/>
        <content:encoded><![CDATA[ <p>For the past few years I've been using <a href="https://bitbucket.org/?ref=adammalone.net">bitbucket</a> as my git host for personal projects. What I most appreciate about it is the ability to have unlimited private repositories. This is especially useful when doing work that's either covered by an <a href="https://en.wikipedia.org/wiki/Non-disclosure_agreement?ref=adammalone.net">NDA</a> or is otherwise not suitable for public consumption.</p><p>This is the reason my bitbucket profile appears so empty; everything is private.</p><h3 id="enter-github"><strong>Enter GitHub</strong></h3><p>Since I actually do a fair amount of personal work outside of Drupal, I can't place all the fun things I make on drupal.org, much to my dismay. However, GitHub gives me a place to put random Perl/Bash snippets as well as entire non-Drupal projects. Since a lot of people lend credence to a developer's experience if they have:</p><ul><li>A <strong>drupal.org</strong> profile</li><li>A <strong>GitHub</strong> profile</li></ul><p>I feel it's only logical to follow suit with <em>everyone else</em>. Seeing as I'm also an advocate of open source, Drupal and sharing (it's how I learned), I've <a href="https://github.com/typhonius?ref=adammalone.net">created a GitHub account</a> to which I'll push work I've done that's fit for public consumption, along with <a href="https://gist.github.com/typhonius?ref=adammalone.net">snippets (Gists)</a>, in the hope that others will be able to benefit from my work!</p><p>There are still some semi-private repos on bitbucket that need some love before GitHub can have them, but until then, enjoy some of the things I've placed on there already, and I invite everybody to <a href="https://help.github.com/articles/fork-a-repo?ref=adammalone.net">fork me</a>!</p> ]]></content:encoded>
    </item>
    <item>
        <title><![CDATA[ Migrating from multisite to singlesite. ]]></title>
        <description><![CDATA[ In this blog post [http://post/drupal-subsites-made-simple] I detailed how easy
it was to create subsites in Drupal. Subsites have the  benefit of sharing the
same codebase so the filesize and memory footprint on the server is much lower.

But what happens when, for whatever reason, you want to move ]]></description>
        <link>https://www.adammalone.net/migrating-multisite-singlesite/</link>
        <guid isPermaLink="false">5f33907e21b8f9692ae93672</guid>
        <category><![CDATA[  ]]></category>
        <dc:creator><![CDATA[ Adam Malone ]]></dc:creator>
        <pubDate>Tue, 10 Jan 2012 21:00:00 +0000</pubDate>
        <media:content url="/content/images/2020/08/multi2single.png" medium="image"/>
        <content:encoded><![CDATA[ <p>In <a href="http://post/drupal-subsites-made-simple?ref=adammalone.net">this blog post</a> I detailed how easy it was to create subsites in Drupal. Subsites have the benefit of sharing the same codebase, so the filesize and memory footprint on the server are much lower.</p><p>But what happens when, for whatever reason, you want to move that site away and stop it from being a subsite?</p><p>Subsites can use modules and themes from a number of directories. Just as a standard Drupal install may use modules and themes from the module and theme directories under sites/all and sites/default/, so too may a subsite use modules and themes from both the all folder and its own subsite folder, as detailed in the image.</p><p>We must therefore ensure that both shared resources (such as those taken from sites/all) and subsite-only resources (sites/mysubsite in the image) are copied to the new single site. Before altering any files or the database, it is strongly advised to <strong>back up your filesystem and database!</strong></p><ol><li>In your new single site directory grab the <a href="https://drupal.org/project/drupal?ref=adammalone.net">latest version of Drupal</a> or use drush dl drupal if you have <a href="https://drupal.org/project/drush?ref=adammalone.net">drush</a> installed.</li><li>Ensure modules and themes from both the sites/all <strong>and</strong> sites/subsite folders are moved into the sites/all directory within the new single site directory.</li><li>Take the sites/subsite/files directory and move it to sites/adammalone/files in the new directory.</li><li>If you have modules using libraries stored in sites/all/libraries, be sure to copy the libraries to the new single site directory.</li><li>Move sites/subsite/settings.php to sites/default/settings.php in the new directory.</li></ol><p>This handles all changes and movements in the filesystem. 
Since Drupal utilises a database to store the majority of its settings, we must make a couple of amendments there too.</p><p>Drupal 7 introduced the <a href="https://drupal.org/node/350780?ref=adammalone.net">code registry</a>, whereby the locations of files are stored so that files are only loaded when needed rather than on every page request. The system table stores filenames and other details about modules in a Drupal site; the registry table stores filenames, the names of classes/interfaces and which module each class/interface belongs to; and the registry_file table stores the name of each file and a <a href="https://php.net/manual/en/function.hash-file.php?ref=adammalone.net">hash of the file</a> to ensure the <a href="https://en.wikipedia.org/wiki/Hash_function?ref=adammalone.net#Continuity">data is up to date</a>. As a subsite, the database stores these filenames under the subsite's directory.</p><p>We must ensure that the new site has the filename field altered to the new file locations in all three of the system, registry and registry_file tables. The difference in the database can be <a href="/sites/adammalone/files/registry_comparison.txt">seen here</a>.</p><p>Since going through things manually is <a href="/post/necessity-automation">dog work</a>, we can use a couple of SQL queries to alter everything for us.</p><ol><li>Connect to the database by typing mysql -uUSERNAME -pPASSWORD -DDATABASENAME <strong>or</strong> navigate to the new site directory and type drush sqlc.</li><li>Run the following queries, substituting in the correct directory:</li></ol><pre><code class="language-sql">UPDATE system SET filename = REPLACE(filename, 'sites/mysubsite/modules', 'sites/all/modules');
UPDATE registry SET filename = REPLACE(filename, 'sites/mysubsite/modules', 'sites/all/modules');
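-- The module-path queries can likely be mirrored for themes. Assumption:
-- any themes under sites/mysubsite/themes were moved to sites/all/themes
-- during step 2; adjust the paths below to match your own layout.
UPDATE system SET filename = REPLACE(filename, 'sites/mysubsite/themes', 'sites/all/themes');
UPDATE registry SET filename = REPLACE(filename, 'sites/mysubsite/themes', 'sites/all/themes');
UPDATE registry_file SET filename = REPLACE(filename, 'sites/mysubsite/themes', 'sites/all/themes');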
UPDATE registry_file SET filename = REPLACE(filename, 'sites/mysubsite/modules', 'sites/all/modules');</code></pre><ol start="3"><li>Clear your cache with drush cc all. (At this point you may need to manually clear the cache tables.)</li><li>Finally, change your filesystem path(s) from sites/mysubsite/files to sites/adammalone/files.</li></ol><p>You may need to alter some DNS or Apache settings to reflect the changes, but after following the above instructions there is nothing more to be done in Drupal!</p> ]]></content:encoded>
    </item>

</channel>
</rss>
