Monday, August 01, 2016

cheq: central_handler and event_queue

No matter how you look at it, even if it's hidden, every good browser-based JavaScript application must have a "central handler", and an "event queue". I call this necessity "cheq", as a mnemonic.

Why is this? Because a JavaScript program cannot hold onto (monopolize or block) the execution thread (the control flow of the browser's computational actions) and still make use of the essential services the browser provides: rendering, user event handling, and so on. We must pass control back to the browser, all the time, or nothing appears to happen.

But how do you do this, if you need your program to do "many things that are tied together", while passing control to the browser between each of these things? The answer, as I've said, is your own "event queue": a control channel under your control, which will persist while the browser is busy, say, rendering something for you. Every JavaScript programmer runs into this problem all the time: why isn't "X" appearing on the screen? Oh -- I didn't pass control back to the browser. This is especially obvious when you build animations.

If you have an event queue of your own, independent of the browser's event system, then you need a central_handler that manages that event_queue. Hence "cheq":


/* -----------------
 cheq.rocks 
 "cheq" means "central_handler event_queue".

 This needs to be at the heart of any browser-based 
 javascript application. It allows you to control your 
 program flow while cooperatively passing control to the 
 browser in order to render, handle events ... 

 The initial call from index.html looks like this:
     event_queue.push({'name':'initial_subhandler_name',
      'delay':2000});
     central_handler()

 Subsequent calls from inside the event look like:
     event_queue.push({'name':'some_subhandler_name',
      'delay':2000});
     central_handler()

 OUR EVENT QUEUE (so the browser regularly gets control):
  uses .push(x) to add to queue
  and .shift() to get the next event
*/
var event_queue = [];
var the_event = null;

// CENTRAL_HANDLER:
//  called by onload and setTimeout
function central_handler() {

    if (event_queue.length === 0) {
        return;
    }
    the_event = event_queue.shift();

    // call the event's subhandler by name
    window[the_event.name]();

    // only loop until the queue is empty
    if (event_queue.length > 0) {
        setTimeout(central_handler, the_event.delay);
    }
}

// end of cheq.rocks
// -----------------
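A quick way to see the pattern working is to run it outside the browser. This sketch is my own illustration: Node has no `window`, so a plain registry object stands in for the global subhandler lookup, and the handler names are invented.

```javascript
// cheq outside the browser: a registry object replaces window[name]()
var handlers = {};
var event_queue = [];
var log = [];

handlers.first = function () {
    log.push('first');
    // a subhandler queues the next event, then returns control
    event_queue.push({ 'name': 'second', 'delay': 0 });
};
handlers.second = function () {
    log.push('second');
};

function central_handler() {
    if (event_queue.length === 0) {
        return;
    }
    var the_event = event_queue.shift();
    handlers[the_event.name]();  // dispatch by name
    if (event_queue.length > 0) {
        // yield to the event loop before handling the next event
        setTimeout(central_handler, the_event.delay);
    }
}

event_queue.push({ 'name': 'first', 'delay': 0 });
central_handler();

setTimeout(function () {
    console.log(log.join(','));  // -> first,second
}, 50);
```

Between the two subhandler calls, control returns to the event loop: that gap is exactly the window the browser would use to render.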


From one perspective, "cheq" may be considered "the heart" of any JavaScript application. As a program "heart" it's not necessarily the idea that makes a program most comprehensible. But maybe it is a useful central organizing principle. You never know until you try. So, I'm going to try. I'll evaluate it with my "smoothly unfolding sequence" approach, described at core memory, making good use of the Life Perception Faculty in the human brain as a means of judgment, and see if I can maintain "explanatory" reasonableness as well. My explorations in maintaining a good development structure, from this starting point, will be here: cheq.rocks

Wednesday, June 22, 2016

The Biology of Password Security

If you can, try to accurately recall a complete sentence that you just said, to another human being. One that wasn't very long, nor very short, nor a cliché, nor a quote. One that was grammatically correct to you. 

That sentence is more than likely unique in human history. At the very least, if you type it into Google, with quotes around it, you are unlikely to find it. Try it a few times. 

People are often under the misimpression that all human sentences must be on the internet, making up a kind of corpus of all languages. Nothing could be further from the truth. Any natural language 'corpus' is a finite set of captured sentences: superficial artifacts of the complex human thought that produced them. Even if a corpus were, somehow, an infinite set of sentences, it wouldn't be the right infinite set, because we don't yet know what that set is.

It's not possible to get a machine to automatically generate your phrase. There is no generator for all (and only) the sentences of any human natural language, for the following very simple reason. The mechanism that produces language is in the human brain; the brain is an extremely complex biological system, and we understand some things about it, but not much. It is highly structured, but we do not know the structure, and in fact we only have a few dozen reliable hints about the structure, despite centuries of intense work by legions of linguists. Since that biological structure is a major factor for any natural language grammar, we have no worked-out grammar (syntax), in the sense of an explicit definition of an infinite set of sentences for any human language. The actual grammar is a faculty of our brain, the faculty we use to both generate sentences and evaluate whether something is grammatical. It is part of our biology, and we have no more conscious access to its detailed operation than we do to our visual system, or, for that matter, our digestive system. We must construct experiments -- testing this biological grammatical 'meter', this language faculty, in the same way we construct experiments on our visual system with optical illusions -- in order to find out things about its operation.

This is a research initiative, and we'll all be dead before the human natural language mechanism is understood well enough to create a generator for all and only natural language sentences. 

So, it's quite safe to take any natural sentence, like this one [but not this one, of course, since it's been written down!] and use it as your password. (These are also known as passphrases).

It's also easier to remember natural sentences than strings of random characters. But it's not trivial to remember natural sentences. You need to train yourself, and learn to be sensitive to your own speaking, including speaking to yourself (we mostly use our language faculty to talk to ourselves). If you're a writer or actor you might already have practiced this facility for remembering sentences, which we often call an "ear". But anyone can do it. It's part of our genetic endowment.
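Even setting aside the generator argument, back-of-the-envelope arithmetic shows why passphrases resist brute force. This is my own illustration; the vocabulary size and password alphabet are assumptions, not measurements:

```javascript
// Compare the search space of a 10-word sentence drawn from a modest
// 20,000-word working vocabulary with a 12-character random password
// over the 95 printable ASCII characters.
var sentenceSpace = Math.pow(20000, 10);  // about 1e43 word sequences
var passwordSpace = Math.pow(95, 12);     // about 5e23 character strings

console.log(sentenceSpace > passwordSpace);  // -> true
console.log(sentenceSpace / passwordSpace);  // sentences win by ~19 orders of magnitude
```

And this counts only word sequences; the attacker would still need to know which sequences are grammatical sentences, which is the generator we don't have.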

If we could develop a culture of language sensitivity, we'd have far fewer problems with passwords. Those silly and unnecessary "trick" password-generators would then become a thing of the past.

One more note about infinite sets, because there's a misconception about them. There are an infinite number of infinite sets, but a very limited number of infinite sets that are human languages ... at most a finite multiple of all the people who ever lived. 

A hypothetical infinite set can be aspirational (all the sentences of English), but an actual infinite set requires a generator function. We can prove that there's an infinite subset of sentences within English ("this, and that, and this ..."), proving that the hypothetical full set is infinite. But we don't have a generator for all and only the sentences of English, or any of the other billions of languages that ever existed (assuming, again, that the upper bound is some multiple of every individual, with somewhat unique human languages of their own).

However, the joy of natural science is to discover more about the structure that is universal in all this variation ... "universal grammar" just means that part of language that is our genetic endowment. In this sense, every human language is the same. And until we understand the universal grammar, we cannot have a single complete generator for any particular natural language. Note also that we use our brains to generate what to say: language is, after all, the expression of thought. So until we understand thought, we won't be able to generate all-and-only sentences of any language. 

So, again, your real human sentence is safe from hackers. (That would have been a good one.) 

Sunday, May 22, 2016

Adding a docker service to boot on Linux

If you want a docker container, or a service that starts docker containers, to start up on system boot, the correct method is buried in the manual for the systemctl command.

The system boot technology on GNU/Linux is shifting. Many distros are no longer relying on the Unix System V approach, with the oddly named /etc/init.d directory and the rcX-type 'run control' directories for different run levels. 

The move is instead towards systemd and upstart. I'm only talking about systemd here, and the best way to install a service that depends upon docker.

Systemd requires unit files installed, in the proper way, in its /etc/systemd/system directories. Here's an example of a unit file, which starts up the mediawiki containers service mentioned in the previous post:

[Unit]
Description=mediawiki-containers
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
WorkingDirectory=/srv/mediawiki-containers
ExecStart=/srv/mediawiki-containers/mediawiki-containers start
ExecStop=/srv/mediawiki-containers/mediawiki-containers stop

[Install]
WantedBy=multi-user.target

This unit file is named mediawiki-containers.service. Don't confuse it with the shell script it launches, which is also named mediawiki-containers. Unit files that manage long-running programs take the .service suffix.

Clearly it must be started after docker is up; that's what the After= and Requires= directives ensure.


Now, there's a directive in here to install it to the 'multi-user target', which is a kind of run-level definition. This adds to the trickiness of actually installing it.

I'm installing it on Ubuntu Linux 15.10, which has systemd pre-installed.

This unit file was authored by the mediawiki folks, and they put it on your system here: /srv/mediawiki-containers/init/mediawiki-containers.service ...


So all that's needed is to tell systemd about it, using systemctl enable:

# systemctl enable /srv/mediawiki-containers/init/mediawiki-containers.service

Created symlink from
/etc/systemd/system/multi-user.target.wants/mediawiki-containers.service to
/srv/mediawiki-containers/init/mediawiki-containers.service.

Created symlink from
/etc/systemd/system/mediawiki-containers.service to
/srv/mediawiki-containers/init/mediawiki-containers.service.

#

It should then start up on the next reboot.
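If you don't want to wait for a reboot, you can also start the service immediately and check that it came up. These are standard systemctl subcommands, shown here against the unit name from the example above:

```shell
systemctl start mediawiki-containers.service
systemctl status mediawiki-containers.service
systemctl is-enabled mediawiki-containers.service   # should print "enabled"
```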

Tuesday, March 15, 2016

Modifying mediawiki-docker: adding LDAP to the mediawiki container

The 2008 release of the 2.6.24 Linux kernel opened a new world. In particular, it supported a lightweight option for encapsulating machine configuration. 

The new kernel features -- 'namespaces' and 'control groups' -- can meet our human need for a virtual machine, without the overhead of simulated machines or additional copies of an operating system.

A 'container' is a process (or a tree of processes) that appears to be running on its own machine. We used to approximate this with a 'jail' or 'sandbox', which you could enter with chroot, improved by later tools and kernel patches. The term 'container' arose with the new, better, standardized isolation, and with software that easily moves these environments around.

The premier open-source management system for containers is Docker:

If you manage or build applications and services on Linux, and you want to save your work, you need Docker. Once you see what it does, you'll want it.

Let's look at using Docker for one application. 

MediaWiki is produced by the Wikimedia Foundation. You use MediaWiki every day, because it's the free software behind wikipedia.org.

The good people at Wikimedia created an experimental install script for a complete, running mediawiki docker deployment. It installs four interacting docker containers onto one machine -- with one line. 

On a Ubuntu 15.10 server, you can type:


curl https://raw.githubusercontent.com/wikimedia/mediawiki-containers/master/mediawiki-containers | sudo bash


This is a bash script, with both install and stop/start services, within the github project wikimedia/mediawiki-containers. If you look at the file, which is called mediawiki-containers, you'll see it pulls four docker projects from the docker hub:

  • tonistiigi/dnsdock
  • mariadb
  • wikimedia/mediawiki
  • wikimedia/mediawiki-node-services

And then it builds 'images'. What are they?

An image is the built form of a docker project; once built, it can be run as a container.

The centerpiece of a docker project is the Dockerfile. It directs the construction of the image. It downloads resources to build the image. The images can be run as containers extremely quickly, because they are already built. If you try to 'docker run' a project that has not been built, docker will download the project and build it into an image for you.
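To make that concrete, here is a minimal, hypothetical Dockerfile. The base image, package, and paths are invented for illustration; a real project's Dockerfile will differ:

```dockerfile
# Start from a base image; each instruction below adds a layer to the new image.
FROM ubuntu:15.10

# Download and install resources at build time, so containers start instantly.
RUN apt-get update && apt-get install -y apache2

# Copy application files into the image.
COPY www/ /var/www/html/

# The command a container runs when started from this image.
CMD ["apachectl", "-D", "FOREGROUND"]
```

'docker build' turns this file into an image; 'docker run' starts a container from that image.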

If you want to modify a docker project, pull your changes forward, to the Dockerfile, as much as possible, or, in this case, perhaps to the mediawiki-containers service file (which gets installed in /srv/mediawiki-containers on your machine). Changes saved only in an image tend to get lost. It's nice to destroy containers when you stop them, and then run from a fresh image.

As an example: I needed LDAP in my docker mediawiki container. One way to do this is to modify PHP on the running image (adding php5-ldap).

# docker exec -it mediawiki bash
(container id)# (make your changes)
(container id)# exit

# docker commit -m "with my modification" mediawiki

But that leaves you with unnecessarily large images to keep track of. 

Why not pull the change forward to the Dockerfile? mediawiki-containers 'pulls' the docker project for mediawiki, whose github source can be found here:

https://github.com/wikimedia/mediawiki-docker

Download it (with git, or as a zip, whatever) and add the change to the Dockerfile.


# cd
# wget https://github.com/wikimedia/mediawiki-docker/archive/master.zip
# apt-get update
# apt-get install unzip
# unzip master
# mv mediawiki-docker-master mediawiki
# cd mediawiki
# emacs Dockerfile
Added php5-ldap to the appropriate spot
# docker build -t greg/mediawiki .
# docker images
# cd /srv/mediawiki-containers/
# emacs mediawiki-containers
Changed run to greg/mediawiki
# service mediawiki-containers restart
# journalctl -f -u mediawiki-containers -n 0 | sed '/: Done in /q'

If you add phpinfo(); to your /srv/mediawiki-containers/data/mediawiki/LocalSettings.php file, for a moment, you'll notice that LDAP is enabled. 

You now just need to configure the LocalSettings.php file to require the MediaWiki LDAP extension and set its $wg configuration values for your network.
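A hedged sketch of what that might look like. The extension path and every value here are placeholders; check the LDAP extension's own documentation for the exact variable names your version expects:

```php
<?php
// In LocalSettings.php -- illustrative values only
require_once "$IP/extensions/LdapAuthentication/LdapAuthentication.php";
$wgAuth = new LdapAuthenticationPlugin();
$wgLDAPDomainNames    = array( 'example' );
$wgLDAPServerNames    = array( 'example' => 'ldap.example.org' );
$wgLDAPEncryptionType = array( 'example' => 'tls' );
```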

Tuesday, December 01, 2015

Accessibility Audit Transparency ... how you can demonstrate your WCAG 2.0 conformance effort, and encourage others

WCAG 2.0 is the current W3C standard for web accessibility. Section 508 of the Rehabilitation Act is incorporating WCAG 2.0 standards into United States law. If you develop a web application, and get federal money or just want to do the right thing, this is important.

There are many tools that help developers address accessibility problems in their web interfaces. Mostly, these problems are HTML tags and attributes that either interfere with the operation of assistive technologies, or belong to features that aren't reachable through those technologies. This is fixable. A set of attributes known as WAI-ARIA, designed for use with HTML5, can make WCAG conformance much easier, and most automated audit tools suggest using ARIA attributes.

If you use an audit tool for development, you can make progress on conformance. But how can you show that you're making progress? 

What? Why would we care about that? 

Because web development on a live site is incremental. For a complex web application, these audit tools can point out hundreds of WCAG 2.0 problems per view or page. Depending on your team's resources, it could take months or years to address all automatically-auditable aspects of your application.

So, say that you want to, essentially, say "We care! We're actually working on it! Look!" How can you demonstrate it? Well, hypothetically, if someone on your team, who is technical, has some extra time, they can keep a spreadsheet of the fixed problems, just in case anyone asks. Or, perhaps your bug-resolution process is public, and you've logged accessibility issues as bugs: that can demonstrate your continuing efforts.

Or, you can simply record your audits as you go, with this free service, using open source software, which I'm announcing here: wcag-audit.

As you can see from this example, which is run on wcag-audit itself, I downloaded our modified Google Chrome Accessibility Audit Extension, and ran the audit, which recorded the results. Then I fixed one accessibility problem, and ran the audit again. Then I fixed another ... and I was done (it's a simple webpage).

The chrome extension has only been modified to send the summary of audit results to the wcag-audit site for recording purposes. 

You can then hand out these wcag-audit links that reference your URLs, to show your progress. Or, someone else may check your progress, or your deterioration, at any time -- keeping us all honest! Let's call that 'crowd-auditing'. In any case, your work becomes a matter of public record, without any extra effort to save evidence or create reports.

This small effort emerged out of work for the University of Oregon, whose many web applications include products for K-12 schools produced by the College of Education. We want to demonstrate that we care, that we're working on accessibility, and to publicly invite everyone else to reduce their automatically-auditable accessibility problems to zero. Let's make a friendly competition out of this. Of course, those audits are not all you need to do to provide a genuinely good web experience, for people using assistive technologies, but it's an important start.

The reason we used a Chrome accessibility audit extension, versus some other form of automation: we didn't want to deal with the state of anyone's program. It's up to you, when you audit your application, to know what the user is viewing, whether they are signed in, etc.

It raises the question, to be dealt with in the future, of what to do with webapps that don't track their important states with their href. Most do not. You can keep track of the timestamp so you know which audit is yours, but the different application states will not be tracked publicly in a differentiating fashion, if wcag-audit doesn't know what the states and their values are.

So, I would like to propose the following published states standard for webapps, SPA's, etc.: 

* hide a div with an id whose value is the href you do use, so it can be found.
* as the value of this div's html content, provide a JSON object of the safe-for-public-view names and values you need to differentiate the states of your browser-based application. 
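Under those two rules, the hidden div might look like this; the id, names, and values here are invented for illustration:

```html
<!-- hidden from the user, but discoverable by wcag-audit -->
<div id="/myapp#settings" style="display:none">
  {"view": "settings", "signedIn": true, "tab": "profile"}
</div>
```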

In the next version, we will look for this hidden div. In this way, you can continue to hide your state values from the user, if you like. But you will need to start keeping track of states before you publish them in this way. So, if you do not track internal state, defined any way you like, please start now -- in my opinion, it's a critical aspect of the future of programming, if we are to make any progress. Tracking internal states can make the inspection of program operation easier, and is a foundation for the explanation of the ideas used by particular developers to create their applications.

I should add, although it's obvious, that other kinds of audits can be carried out this way, i.e. public crowdsourced auditing, whether they are automated or not. Some already are, more or less: that's what a reputation or a review is. All that's important here is that the audits are public.

These audits are also a kind of proof of system status -- we do this already with uptime, for example. Why not regarding other claims? Privacy, security, transparency, etc. -- all can be audited publicly, for real-time systems that commit to these values.

Tuesday, September 29, 2015

Hitting Shibboleth from a Cloud Service

Say you use a mobile browser to hit a web resource which is protected as an SP (Service Provider), authenticated by a Single Sign-On service's IdP (Identity Provider). Say you authenticate, get your resource, and then switch from wifi to your cellular data provider. If you then refresh the resource, the SP will invalidate your session (so, it will send you back to the IdP login page), because your IP address has changed -- although your original session will probably still work if you switch back to your wifi network. This is the system behavior if your SSO uses the default settings.

But you can configure Shibboleth SP to ignore the changing IP address, by adding this property to the Sessions element in /etc/shibboleth/shibboleth2.xml:

consistentAddress="false"


so that it looks something like:


<Sessions
lifetime="28800"
timeout="3600"
relayState="ss:mem"
consistentAddress="false"

checkAddress="false"
handlerSSL="false"
cookieProps="http">


... and then restart shibd.

What does this have to do with a cloud service? Well, if you tried to turn your Google App Engine cloud-based application, on appspot.com or some configured domain, into a User Agent (UA), in the UA/SP/IdP trinity, and you check the logs on the SP or IdP, you will notice that as a UA Google changes IP addresses constantly. So, as said above, you need to turn off the SP's hidden default 'true' value for consistentAddress.

If you want your SSO system to work in this scenario, you'll probably also need to ignore different IP's on the IdP. This is a little harder: you need to re-install shibboleth (without overwriting the existing configuration). But first -- go to the directory you installed the Shibboleth IdP from, then go to src/main/webapp/WEB-INF, and edit the web.xml file. Add the following within the filter section, right after the end of the definition of the filter class:

<init-param>
  <param-name>ensureConsistentClientAddress</param-name>
  <param-value>false</param-value>
</init-param>


Then you need to: 
  • stop tomcat
  • run the install script again -- make sure to answer "no" to "Would you like to overwrite this Shibboleth configuration?"
  • start tomcat
Then you can use HTTPClient or your favorite user-agent framework to interact with the Shibboleth IdP and SP from Google App Engine or Google Cloud Platform.

Shibboleth IdP: "Error Message: SAML 2 SSO profile is not configured for relying party"

Say that your service has suddenly stopped letting you sign on, and you get this error message: "SAML 2 SSO profile is not configured for relying party".

This is an 'accurate' but confusing bit of text for people who are using an SP (Service Provider) which suddenly can't authenticate against an IdP (Identity Provider) in the Shibboleth SAML 2 Single Sign-On system. 

Although there are other possibilities, the probability is that your SP's metadata has expired. 

The metadata (which is an XML response available through a URI, which, after it is fetched from the SP, sits as an XML text file on the IdP) has a 'validUntil' property that you can check, if you have access to the metadata directory of the IdP. If you aren't the admin of the IdP, but are the admin of the SP, you need to contact the IdP, so you can get new metadata to them, and set up a regular pull of metadata.

If you're a user, you need to contact the SP, the 'service provider', i.e. the administrator of the application you were trying to use, or the web resource you were trying to access.

So, why this error message? If you read the debug output in the logs, it is possible to comprehend. The '/conf/relying-party.xml' file has entries that point to files in the metadata directory, one for each 'relying party', i.e. SP. If the metadata file is no longer valid, then there's no configuration information for the relying party. Just about any configuration problem with the SP could trigger this error, but expiration is clearly the most common occurrence. 

So, I'd recommend to the Shibboleth team that they spend a moment and provide a more detailed reason in this error message ... this increase in detail will actually make it easier to see what's going on, from a non-expert's standpoint. Because 'profile' and 'relying party' may be 'technically correct' here, but these terms don't provide enough hints for human comprehension. 

This is a pretty pervasive issue with Shibboleth ... the software does not come with sufficiently good explanation. This makes it far less useful than it deserves to be. If the meaning of the messages, operations, and technical directions of a human-made system is not explained sufficiently and simply enough for smart people outside the culture of a project's developers, then the meaning of the software is still locked within the minds of that team, and will die with them, making the software useless. 'Good code' is not enough. It's not good, in fact, if simple things that people need to understand take months of unnecessary study. I would advocate for a consortium to provide grants to some good technical writers to try again at documenting Shibboleth for the rest of humanity.

Friday, August 07, 2015

The Android, Cordova, Ionic primer

Do you want to develop Android apps? 
Don't do it under Microsoft Windows: Android Studio is difficult-to-impossible to use on Windows, mostly because of debugging drivers. A Mac or Linux machine is your best option. 

Do you want to also develop Apple iOS apps?
You'll need a Mac. No choice there. Apple insists. 

Do you want to use Unix tools? 
A Mac or Linux machine is your best option.

So, to shorten a long story: 


Modern mobile development 
is easiest on a Mac.
_____
_____

A Primer for Three kinds of 
Android Development (on a Mac)

You'll need an Android device, and a Mac. I'll assume you know how to use the terminal app on a Mac.

If you keep to the following lesson order, you shouldn't run into trouble, since cordova and ionic development depend on the presence of Android Studio. We will:

1. deploy a little native Android Studio app to your Android device
2. deploy a little cordova Android app to your Android device
3. deploy a little ionic Android app to your Android device

------
You can skip this blue text, which is an aside, on the subject of explanatory software:

Instead of this sequence/lesson-order, which meets many technological dependencies silently, we could employ a set of productions. Each production could be thought of as either [target: ordered list of dependencies] or [non-terminal: ordered list of terminals and non-terminals]. Using this, you could build your own sequence/lesson-order, by starting with any target:




ionic-run: ionic-install ionic-start ionic-build
cordova-run: cordova-install cordova-start cordova-build
android-java-app-run: android-install android-java-app-start android-java-app-build
ionic-install: cordova-install
cordova-install: xcode-and-clt-install android-install 
android-install: java-install android-studio-install

... but the possible paths haven't been converted into clearly written sequence/lesson-orders, so it's easiest to follow my given (1,2,3) sequence.


------


In general, my little lessons are like this:

a) do the minimum necessary to install and configure
b) create the smallest prepackaged app
c) make a change to it
d) build, emulate, serve
e) put it on your device

... planned with the idea that iterating (c,d) and (c,d,e) will let you get started on developing the app you want. A fuller lesson plan would include debugging, finding the appropriate framework or functionality at the appropriate time, etc.

------

1) deploy a little native Android Studio app 
to your Android device

-> Install the Oracle Java JDK

http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Download the most recent JDK for your machine.
Install.

-> Install Android Studio

https://developer.android.com/sdk/index.html
Download.
Install standard.

-> Use Android Studio

Launch Android Studio.
Start a new project.
Click the blank app.
Let it build.
Edit the value of the Hello World string's resource.
Build and run the blank app on the emulator.

-> Deploy to your Android device

Under Settings, at the bottom, find the Build Version.
Tap it seven times. After a few taps, it will count down your remaining taps.
A Developer Settings section then appears in settings. 
Open that section, and enable usb debugging.
Plug the device into your mac.
You may need to agree to pair the two at various stages in this process.
Go to Android Studio.
Click the 'Run' button (the green right-arrow at the top).
Choose your device.
The little completely native app should now deploy on your phone.
Play with deleting it and deploying again etc.

2) deploy a little cordova Android app 
to your Android device

-> Get some developer tools

Get an apple id. 
Sign in to: 
https://developer.apple.com/downloads/
Download Xcode and the command line tools.
Install.
(If you don't have a developer account, you can download Xcode for free from the App Store, install it, and run any of a number of commands in your terminal -- like 'gcc' -- and your Mac will then help you install the command line tools.)

Get the nodejs download from
https://nodejs.org
and install.

Make sure that node, npm and git are installed:
node -v
npm -v
git --version

[And for below, note that, if you find that the use of sudo is impossible for some reason, for example, running from inside of an IDE like WebStorm, you just need to claim ownership of all the permissions in your home directory's .npm directory: sudo chown -R $(whoami) ~/.npm ]


-> Install Cordova

sudo npm install -g cordova
sudo npm install -g ios-sim (useful later)

-> Use Cordova

make a directory for your cordova apps:
cd
mkdir cordova
cd cordova
create the sample cordova HelloWorld app:
cordova create myhello com.example.hello HelloWorld
cd myhello
cd www
edit index.html with your favorite editor.
emacs index.html
change the string ‘device is ready’ to something else.
Now add platforms to the project:
cordova platform add ios
cordova platform add android
Build your android app (note that run does this automatically):
cordova build android
Run it in the emulator:
cordova emulate android
Run it on your device:
cordova run android

3) deploy a little ionic Android app 
to your Android device

-> Install Ionic

sudo npm install -g ionic

-> Use Ionic

make a directory for your ionic apps:
cd
mkdir ionic
cd ionic
create a sample ionic 'sidemenu' app (other options are 'tabs' and 'blank'):
ionic start myApp sidemenu
cd myApp
install the local builder.
sudo npm install -g gulp
ionic platform add android
ionic build android
Look at it in the browser:
ionic serve
or
ionic serve -f chrome
Select ‘2’.
Browse to the given localhost url to try the app in the browser.
Then go back to the terminal and:
q
You can also see it in the emulator. 
Cd to the root directory of the project folder, then:
ionic emulate android
ionic run android
Now make a modification:
cd www
cd js
And edit app.js
Change some of the strings in the JSON object that populates the app.
Run on your device (it will build automatically). 
Cd to the root of the project folder, and:
ionic run android
You should see the app on your device.


-------
Note:

Apparently Cordova by default blocks http requests. Just run this command:

ionic plugin add cordova-plugin-whitelist

And you're good to go.

Thursday, July 23, 2015

How to fix the window resize problem with Ubuntu Linux in VirtualBox on Windows

This is a popular enough post, that I thought I'd reprint it here:

Oracle supports a free VM hypervisor called VirtualBox. I'm using version 4.3.28 on Windows 7. I want to put Linux on it. Ubuntu offers a convenient .iso image download. I download Ubuntu 14.04.2.

Out-of-the-box, the install comes with a problem. The display is too small, 640 x 480, to even see the entire display settings screen. There are no options for making it bigger, or for making it resize automatically.

Systems are so unbundled, dependency building is so unreliable, and conflicts are so common that it's unclear whether my solution will work for even a slight variation on the situation above.

But here's my solution. When I found it, I deleted the virtual machine and tried the same installation procedure again, just to be certain.

1. install Ubuntu on the virtual machine
2. at the top of the VirtualBox guest operating system desktop window, click Devices->Insert Guest Additions CD Image …
3. you will be prompted to run the Ubuntu Guest Additions CD. Do it.
4. then shut down from within the virtual desktop, and reset from the dashboard. Resizing should now work.

The dozen other solutions are probably just out-of-date. But, again, software distribution is in such an irresponsible state that it is not possible to know this for sure, without an extensive research initiative.

Wednesday, May 20, 2015

Android Studio 1.2 problems

Every IDE is terrible. I won't explain my reasoning here. If you're interested, see my computing philosophy.

But the new Android Studio is not the worst of them. As one would expect, there are immediate snags and bugs. I'll document a few, mostly as problem-resolution pairs.

After downloading, installing, and running, you need to manually update the studio: click the little android-with-down-arrow icon (the SDK Manager) and see if anything needs to be installed. Google will probably fix some of the problems below, in future updates. Including this installation updating issue.


Task -- Create a new project with only default values. Your first new app project will have several problems at birth:


problem: a “Rendering Problems” error -- “the following classes could not be instantiated” ...
solution: you need to change the build.gradle classpath to 1.2.3, and rebuild. Then simply close the semi-opaque error window.


problem: “warning: the project encoding (windows-1252) does not match the encoding specified in the gradle build files (utf-8)”
resolution: there’s a link provided within this error window: “open file encoding settings”. Click it. For both the IDE and Project, select “utf-8”.


Task -- build and run:


problem: running an app in the emulator only displays the locked screen
resolution: click the lock and drag upwards


problem: (on Microsoft Windows) emulator window off the top of the screen
solution: click on emulator, then press Alt+Space, then choose Move, then move with arrow keys


Task: add more "activities" (i.e., “screens” or "pages"):


cool thing: try Tools->Android->Navigation editor. Click to add an activity.
but: deleting activities takes some attention. You need to check any appropriate XML manifests, delete related code, and refactor.


Task -- add an activity in navigation editor:


problem: one of the least intuitive aspects of the layout editor (which you get to by double-clicking activities in the navigation editor) is that you cannot simply type new text after double-clicking it. The text has to become a string resource.
resolution: double-click the box with the text in it, hit the little ellipsis (...), and a dialog shows up. Create a new resource (at the bottom), then say 'ok'. This will take care of it.
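Behind the scenes, that dialog just writes an entry into res/values/strings.xml and points the widget at it. A sketch, with a hypothetical resource name:

```xml
<!-- res/values/strings.xml (the name "welcome_message" is made up for illustration) -->
<resources>
    <string name="welcome_message">Welcome!</string>
</resources>
```

The layout XML then references it as android:text="@string/welcome_message" instead of holding the literal text.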


Task -- add a radio group:


problem: orientation of radio group container layout.
resolution: 
1. click on the actual radio group on the right, in the component tree
2. look at the properties box, below the component tree
3. change “orientation” property to “horizontal”


problem: layout width & height default to “match_parent” in the properties box, making the control huge. Note this default may be inherited from the layout type, but it is still silly.
resolution: change “layout width” & height to “wrap_content”


problem: check a radio button by default
resolution: it's the radio button property ‘checked’. Of course, the programmatic solution would be different.
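For completeness, here is a sketch of the programmatic version. The ids radioGroup1 and radioButton1 are hypothetical; use whatever ids your layout actually defines:

```java
import android.widget.RadioButton;
import android.widget.RadioGroup;

// Inside onCreate(), after setContentView(...):
RadioGroup group = (RadioGroup) findViewById(R.id.radioGroup1);
group.check(R.id.radioButton1);   // checks this button, unchecks its siblings

// Or act on the button directly:
RadioButton first = (RadioButton) findViewById(R.id.radioButton1);
first.setChecked(true);
```

Using RadioGroup.check() is usually the safer choice, since it also clears any previously checked button in the group.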

problem: the palette disappears from layout editor
solution: there’s a tiny little arrow at the top left; click exactly on it, and a “palette” button should show up. Then double-click the .xml tab; if the palette button is still showing, mouse over the left edge to get a window-adjustment arrow, pull it right, and the palette shows up again.


problem: error “resource id cannot be empty string at ‘@+id/’”
solution: give the id an actual name, e.g. ‘@+id/test’ (note the ‘+’ comes right after the ‘@’) … in general, id’s map code to activities or fragments in the XML.


Task -- add a web view:

https://developer.chrome.com/multidevice/webview/gettingstarted

Add the webview from the palette, in the layout editor.

Set height and width with down arrow in properties, to “match_parent”

Create an id, like “activity_main_webview” (it must match the name you pass to findViewById in the java below):
android:id="@+id/activity_main_webview"
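Put together, the WebView element in res/layout/activity_main.xml looks roughly like this (the attribute values follow the Chrome guide linked above; adjust to taste, and keep the id in sync with your java code):

```xml
<WebView
    android:id="@+id/activity_main_webview"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```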

Then edit your MainActivity java file.

You add the url in java, in MainActivity:

public class MainActivity extends Activity {
   private WebView mWebView;
   @Override
   protected void onCreate(Bundle savedInstanceState) {
       super.onCreate(savedInstanceState);
       setContentView(R.layout.activity_main);
       mWebView = (WebView) findViewById(R.id.activity_main_webview);
       mWebView.loadUrl("http://beta.html5test.com/");
   }
}


If you don’t want the system browser to launch instead, set a WebViewClient on the WebView before loading the URL:

public class MainActivity extends Activity {
   private WebView mWebView;
   @Override
   protected void onCreate(Bundle savedInstanceState) {
       super.onCreate(savedInstanceState);
       setContentView(R.layout.activity_main);
       mWebView = (WebView) findViewById(R.id.activity_main_webview);
       mWebView.setWebViewClient(new WebViewClient());
       mWebView.loadUrl("http://beta.html5test.com/");
   }
}

… and if the page you’re loading has JavaScript, you’ll need to enable it:

public class MainActivity extends Activity {
   private WebView mWebView;
   @Override
   protected void onCreate(Bundle savedInstanceState) {
       super.onCreate(savedInstanceState);
       setContentView(R.layout.activity_main);
       mWebView = (WebView) findViewById(R.id.activity_main_webview);
       // Enable JavaScript
       WebSettings webSettings = mWebView.getSettings();
       webSettings.setJavaScriptEnabled(true);
       mWebView.loadUrl("http://beta.html5test.com/");
   }
}


problem: the IDE can’t find classes, like WebView, WebViewClient, etc.
solution: click and hover, and accept the IDE’s “alt-enter” option to add the appropriate import.


problem: webpage doesn’t load. get “Failed to load resource: net::ERR_CACHE_MISS …”
solution: in the primary manifest, AndroidManifest.xml, put this line anywhere above the <application> element (the line was eaten by the blog's HTML; it is the standard internet permission, per the Chrome guide linked above):

<uses-permission android:name="android.permission.INTERNET" />

easy misunderstanding: if you use RelativeLayout, the ‘layout_below’ property in the activity XML file may be confusing. It refers to the xml id of the view laid out above it.


problem: constant emulator runtime errors. “glUtilsParamSize: unknow param” (sic)
solution: live with it. you can get rid of them by stopping the AVD (android virtual device) manager, editing your virtual device, unchecking “Use Host GPU”, killing and restarting the emulator.
but: it seems to actually break (or intolerably slow-down) the emulator
future possibility: some kind of error-message filter in Android Studio, which for these errors should be in place by default. I expect this will change in some update, since they’re really annoying.


problem: run doesn’t start the app the first time. it starts the emulator.
solution: run twice


Task -- Installing git:


problem: (Microsoft Windows) When you enable VCS with git in Android Studio, it can’t find git.exe, even if you fully installed git with the defaults. And the Windows start-menu search can’t find it either.
solution: I found it here: c:\program files (x86)\Git\bin\git.exe