VisualVM for Java 11 in Ubuntu

If, like me, you’ve recently moved from Java 8 to 11, you might have noticed that Java VisualVM has gone. It now needs installing separately and, annoyingly, still needs Java 8 to run. So on a system with only Java 11, you’ll need to:


# Install VisualVM itself
sudo apt-get -y install visualvm
# VisualVM still needs a Java 8 JDK to run under
sudo apt-get install openjdk-8-jdk
# Point VisualVM at the Java 8 installation
visualvm --jdkhome "/usr/lib/jvm/java-1.8.0-openjdk-amd64"

Ubuntu 16.04 xrdp Remote desktop “password failed”

If you remote desktop into Ubuntu 16.04 you may have recently found yourself unable to do so, with a misleading “password failed” message. In fact, if you had actually got the password wrong it would say “login failed” earlier in the process.

E.g. when logging in via sesman-Xvnc you see:

tcp connected
security level is 2 (1=none, 2=standard)
password failed
error - problem connecting

This came about via a security update and is a known bug (https://bugs.launchpad.net/ubuntu/+source/xrdp/+bug/1811122); downgrading currently works as a workaround:


sudo apt-get install xrdp=0.6.1-2

PPK to PEM – PuTTYGen and OpenSSL

I’ve got a ppk file (a file created by PuTTYgen) that I use to SSH into my AWS Linux boxes. Since I don’t use Windows boxes very much, I always forget the steps required to get the key into the correct pem format for obtaining the Windows password via the AWS console for a newly spun-up machine.

To do this:

  • Open PuTTYgen
  • Load the ppk file
  • Select Conversions > Export OpenSSH Key and save e.g. temp.pem
  • Use OpenSSL to handle the decryption e.g. “openssl rsa -in temp.pem -out key.pem”
  • The output e.g. key.pem is now in the correct format to be pasted into the AWS console.

Making things open with a Raspberry PI and AWS Lambda

One part of gaining out-of-hours access to my office is via a remote-controlled gate. I wanted everyone to be able to open the gate without the need to purchase a £10 fob and remember to have it with them. I embarked on making a quick and dirty web/phone accessible form that would initiate the opening of the gate provided someone submitted the correct password.

Building the circuit

The first step towards this was to buy a single fob and get it coded to open the gate. Next I needed to operate the fob via another circuit i.e. I needed to use a relay. Now I had to start googling for a circuit design that my tiny mind could understand and I eventually found this: https://openhomeautomation.net/control-a-relay-from-anywhere-using-the-raspberry-pi and started looking into ordering the bits. After a couple of weeks only a few of the bits had arrived from Amazon and I was bored with waiting. After contacting my brother who works in a school, I ended up making the following substitutions:

  • 5v SPST > 6v SPDT relay
  • 2N2222 > BC548 transistor

Apart from that, everything remained the same as in the link and I built the circuit.

Instead of the LED, I opened the pre-programmed fob and found the push button. I proceeded to connect one side of the button to the COM pin of the relay and the other side to the NO (normally open) pin. I then dropped into Raspbian, activated the pin in Python and made sure the fob’s light came on. You’ll easily be able to find out how to do this by googling, or refer to my Java client later on and avoid Python altogether.
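
If you do want to skip Python, a quick standalone test along the following lines will pulse the relay long enough to see the fob light up. This is a minimal Pi4J sketch; the pin address is an assumption that has to match your wiring (it mirrors the full client later):

import com.pi4j.io.gpio.*;

public class FobTest {
	public static void main(String[] args) {
		// Provision the pin driving the relay transistor, starting LOW
		final GpioController gpio = GpioFactory.getInstance();
		final GpioPinDigitalOutput pin = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_04, "relay", PinState.LOW);
		pin.pulse(2000, true); // hold the relay (and so the fob button) closed for 2 seconds
		gpio.shutdown();
	}
}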

Building the “server”

I would be able to give the RPI internet access from the office but it would only be able to reach out and poll the server periodically rather than have anything pushed to it. Aside from hosting a little HTML form somewhere, I simply needed 2 endpoints: one telling the server that someone wants to open the gate and one for the RPI to check if it needs to open the gate. In addition, I needed a way (in-memory/file/database) to hold the state of the switch.

Using AWS Lambda combined with API Gateway is a quick way to publish a bit of code without provisioning any compute resources and have it easily accessible via a URL. DynamoDB with a single table would suffice to store state. I won’t discuss the database any further than saying it had a table called “fob” with a column called “open” which, when someone wanted to open the gate, would be given a value of “requested”. You might see these names as constants in the code later. If you create something like this, make sure that you give your Lambdas a role that provides access to the resource.
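
If you’d rather create the table programmatically than through the console, a minimal sketch with the Java SDK might look like this (the throughput values are arbitrary; the key schema matches how the Lambdas below address the table):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.*;

public class CreateFobTable {
	public static void main(String[] args) {
		AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.defaultClient();
		// "fob" table keyed on the "open" column used by the Lambdas below
		ddb.createTable(new CreateTableRequest()
			.withTableName("fob")
			.withKeySchema(new KeySchemaElement("open", KeyType.HASH))
			.withAttributeDefinitions(new AttributeDefinition("open", ScalarAttributeType.S))
			.withProvisionedThroughput(new ProvisionedThroughput(1L, 1L)));
	}
}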

I used Java 8 to create the Lambdas. The first one, shown below, expects a single password param to be posted, checks it and, when correct, adds the “requested” value to the “open” column of the “fob” table.

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class RequestOpenLambdaFunctionHandler implements RequestHandler<Request, String> {

	final AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.defaultClient();

	public static final String TABLE_NAME = "fob";
	public static final String COLUMN_NAME = "open";
	public static final String PRESSED_NAME = "requested";

	public String handleRequest(Request input, Context context) {
		// The mapped request body arrives as "password=<value>"
		String body = input.bodyjson;
		if (body.split("=")[1].equals("password")) { // "password" here is the placeholder secret
			Map<String, AttributeValue> itemValues = new HashMap<>();
			itemValues.put(COLUMN_NAME, new AttributeValue(PRESSED_NAME));
			ddb.putItem(TABLE_NAME, itemValues);
		}
		return "";
	}
}

The second Lambda is used to check whether a request has been made for the gate to open; it signals “opening” or “closed” depending on the state. We’ll need this in API Gateway later.

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class CheckOpenLambdaFunctionHandler implements RequestHandler<Object, String> {

	final AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.defaultClient();

	@Override
	public String handleRequest(Object input, Context context) {
		Map<String, AttributeValue> itemValues = new HashMap<>();
		itemValues.put(RequestOpenLambdaFunctionHandler.COLUMN_NAME, new AttributeValue(RequestOpenLambdaFunctionHandler.PRESSED_NAME));

		if (ddb.getItem(RequestOpenLambdaFunctionHandler.TABLE_NAME, itemValues).getItem() != null) {
			ddb.deleteItem(RequestOpenLambdaFunctionHandler.TABLE_NAME, itemValues);
			// Deliberately thrown so API Gateway can map it to a 200 via the Lambda Error Regex (see below)
			throw new RuntimeException("opening");
		}

		return "closed";
	}
}

In API Gateway you’ll need two resources: “check” with a GET method and “open” with a POST method. In the POST method you’ll need a mapping of type “application/x-www-form-urlencoded” in the Integration Request section. You can then select the provided default “Method Request passthrough” template. You’ll notice from the Lambda that I decant the request into a POJO (a minimal sketch is shown below) of type “Request” that has a public field called “bodyjson” (you can’t use a hyphen in a Java field name). I therefore removed the hyphen from body-json in the default template. As we know there will only be one param, we can get at it quickly and easily as per the first Lambda example.
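
Something like this is all that POJO needs — only the bodyjson field matters here:

// Minimal sketch of the POJO the mapping template is deserialised into;
// bodyjson is body-json with the hyphen removed, as described above
public class Request {
	public String bodyjson;
}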

The quickest way, and with the least overhead I could think of, to signal to the RPI client when the gate should open was to toggle the response code: 200 (OK) would signify opening the gate and 418 (I am a teapot) would signify that the gate should remain closed. In order to achieve this, just as we mapped the request, we’re going to need to interrogate the response and change the status code accordingly. This is done through the Integration Response section on the outbound path of API Gateway. We need two rules: a 200 rule whose Lambda Error Regex matches the string “opening” thrown from the Lambda, and a 418 rule for everything else (default mapping set to yes). Yes, it’s horrible, but you’ll notice I’m throwing a runtime exception from the code when the gate should open as I’m trying to keep the code as small as possible.

Once you publish your API and get its address you can head back to the RPI and write your client. Pick your language of choice (Java again for me, although I did one in Python as well), create a loop, pick a polling interval and work out which pin you need to activate to bring the fob circuit to life. A simple Java example is shown below.

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;

import javax.net.ssl.HttpsURLConnection;

import com.pi4j.io.gpio.*;

public class FobServer {
	public static void main(String[] args) {
		final GpioController gpio = GpioFactory.getInstance();
		final GpioPinDigitalOutput pin = gpio.provisionDigitalOutputPin(RaspiPin.getPinByAddress(4), "relay", PinState.LOW);
		pin.setShutdownOptions(true, PinState.LOW);
		String address = "https://yoururl/check";
		while (true) {
			try {
				Thread.sleep(8000); // polling interval
			} catch (InterruptedException e) {
				Thread.currentThread().interrupt();
			}

			try {
				URL url = new URL(address);
				HttpsURLConnection con = (HttpsURLConnection) url.openConnection();

				// 418 (I am a teapot) means "stay closed"; anything else (200) means open
				if (con.getResponseCode() == 418) {
					continue;
				}
				System.out.println("Opening");
				pin.pulse(2000, true); // hold the relay (and the fob button) for 2 seconds

			} catch (MalformedURLException e) {
				e.printStackTrace();
			} catch (IOException e) {
				e.printStackTrace();
			}
		}
	}
}

Now we can all use our phones to open the gate 🙂

AWS Certified Solutions Architect Associate – Exam Guide

For the most part, Amazon’s CSA exam is grounded nicely in tasks and scenarios that you are very likely to encounter in the real world as you embark on architecting solutions using AWS services. For this reason, there is no better way to learn than logging in and getting some hands-on experience. You are, however, unlikely to be able to do this for everything you need to know, so I’ve included the process I followed, which helped get me 85% on the exam.

Firstly, and this is only my opinion, the official practice exam is a waste of time (and money, if you don’t have a voucher). It’s short, cannot be paused and you cannot review the questions after it has been completed.

I used A Cloud Guru (https://acloud.guru/) as the first port of call for getting a good foundation. I purchased both the Solutions Architect and SysOps courses and found both struck a good balance between providing tips for the exam and, more importantly, making sure that you are actually functional in certain areas like VPC creation etc. rather than just knowing the theory. Their courses are frequently updated to take into consideration new questions that are beginning to appear in exams and their forums are a good place to get an even more current view of what others are encountering.

Where I found A Cloud Guru to be lacking was the practice questions. Although I’ve never been a fan of focussing too much on practice questions during previous academic endeavours, here I found it important. Exposure to the format and style of the exam should definitely be experienced. There are common formats that come up again and again, worded slightly differently each time; when first encountered they cause panic and yet, on subsequent occasions, they’re easy points. I found the A Cloud Guru questions throughout to be very easy and the final set at the end very difficult. I now felt I had the foundation; I just needed to practise exam technique.

If you can look past the instances of poor English and a few questions that are simply wrong, Whizlabs (https://www.whizlabs.com/) provides a wealth of questions in a very similar format (not identical) to the actual exam. I would suggest purchasing and going through all the practice quizzes, including the section questions at the end. Don’t worry too much about doing them under exam conditions (re: time) and pause the quiz as and when you need. To avoid simply learning to regurgitate answers, if you find a hard question, instead of trying to muddle through, go and research the area, then come back and tackle it before moving on. I did this a lot. Interestingly, I didn’t find the bad English too off-putting; it’s often the case in the actual exam that I only fully understood the question after examining the answers. Whizlabs was like this as well, but for completely different reasons 😊

In addition to the two resources above I read FAQs for the services likely to come up. I did not read any white papers.

Play Framework 1.x Ubuntu Service

Recently I’ve had to move an old Play web application, previously running on Windows as a service under YAJSW, to Ubuntu in AWS. It’s been a long time since I’ve done much in Linux and in that time Ubuntu has apparently moved from init.d to systemd services. After reading a bit about services I created the following /etc/systemd/system/play.service (unit files need the .service suffix for systemd to pick them up):


[Unit]
Description=Your service name
After=network.target

[Service]
Type=forking
Environment=JAVA_HOME=/usr/lib/jvm/java-7-oracle/jre
Environment='_JAVA_OPTIONS=-XX:PermSize=512m -XX:MaxPermSize=1024m -Dprecompiled=true'

ExecStart=/opt/play/play start /yourproject --%yourprofile
ExecStop=/opt/play/play stop /yourproject --%yourprofile

RestartSec=10
Restart=always

[Install]
WantedBy=multi-user.target

This worked first time!

Starting at the top, following the description we specify that the service should run after network configuration has taken place. The service itself is “forking”, so systemd considers the service started once the process forks and the parent has exited. My Ubuntu AMI has multiple Java versions, so we specify that we intend to run Play under Java 7. Because of the different ways of running Play applications, e.g. launching through Python, via Java directly or as a wrapped service, it’s sometimes confusing how to pass the necessary properties in. While I’ve left the profile, e.g. prod, stage etc., to the start and stop execs, I’ve added everything else I require to “_JAVA_OPTIONS”, including settings for perm gen memory, and specified that I require Play to use the pre-compiled templates. Next are properties related to restarting the service. Finally we have the install section, where we basically specify that the service should be included at about run level 3.

This is the basic setup, but for security don’t forget to run the service as a specific user using User=, Group= and an appropriate UMask.
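
For example, added to the [Service] section (the user and group names here are just placeholders for whatever account you create for the purpose):

User=play
Group=play
UMask=0027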

CLR Approach to SQL Server Natural Language Sorting

If you’ve ever dealt with user-facing lists, then it’s likely at some point you’ll have wanted to sort them. You might have found that alphabetical sorting, either in your code or SQL, meets your needs. If, however, you are dealing with item titles that have some implicit order defined by the user in their text, you might want a sorting implementation that better meets user expectation. Consider the list:

  • “1. Title One”
  • “2. Title Two”
  • “10. Title Ten”

Sorted alphabetically, you’d see “10. Title Ten” as the second entry in the list which would not meet the user’s intention of it being last.

It won’t take you long to find a natural language sorting implementation in your language of choice and get it running in your code. In my case, this involved grabbing a Java implementation. I’ve had one floating around in my code for a long time but won’t post it in its entirety here since I can’t fully trace its provenance. Using it in my code gives the desired result but, as we’re going to see, sometimes this is not where you want to sort.
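
To give a flavour of the idea (this is a minimal sketch, not the implementation I use), such a comparator compares runs of digits numerically and everything else character by character:

import java.math.BigInteger;
import java.util.Arrays;
import java.util.List;

public class NaturalOrderComparator implements java.util.Comparator<String> {
	@Override
	public int compare(String a, String b) {
		int i = 0, j = 0;
		while (i < a.length() && j < b.length()) {
			if (Character.isDigit(a.charAt(i)) && Character.isDigit(b.charAt(j))) {
				// Consume the full digit runs from both strings and compare them numerically
				int si = i, sj = j;
				while (i < a.length() && Character.isDigit(a.charAt(i))) i++;
				while (j < b.length() && Character.isDigit(b.charAt(j))) j++;
				int cmp = new BigInteger(a.substring(si, i)).compareTo(new BigInteger(b.substring(sj, j)));
				if (cmp != 0) return cmp;
			} else {
				if (a.charAt(i) != b.charAt(j)) return Character.compare(a.charAt(i), b.charAt(j));
				i++; j++;
			}
		}
		// Shorter string (once common parts are consumed) sorts first
		return (a.length() - i) - (b.length() - j);
	}

	public static void main(String[] args) {
		List<String> titles = Arrays.asList("10. Title Ten", "1. Title One", "2. Title Two");
		titles.sort(new NaturalOrderComparator());
		System.out.println(titles); // [1. Title One, 2. Title Two, 10. Title Ten]
	}
}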

When you’re dealing with long lists, you might encounter the need for pagination and in doing so, realise that you do not wish to load all data from the database before you apply the sorting. Depending on the framework you’re using, you’ll possibly have some nice support for pagination which boils down to running a TOP or OFFSET-FETCH SQL statement or equivalent. In such cases you’re likely going to need to do the sort on the database yet still meet user expectations.
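
For example, fetching the third page of ten items in SQL Server (2012+) might look like the following — table and column names are illustrative — and the ORDER BY has to produce the natural order for the page to be right:

SELECT id, title
FROM SortingTable
ORDER BY title
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;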

Initially you may look at either:

  • Finding a function that can implement a version of your sort in native SQL.
  • Adding a new column to your table and maintaining it in code whenever rows are updated or inserted. This column should be a number representing the row’s order amongst the others.

I discounted the first option due to performance, combined with the fact that my existing sorting had been tested “in the wild” for some time and (after a bit of poking) the SQL I googled didn’t match up to it very well.

The second option worked very nicely for a period of time; the existing Java comparison code was used in combination with a binary search to locate each row’s position amongst the other rows. The problem was that I had to ensure the sorting code was run after every insert/update, which meant knowing when every insert/update was happening and ensuring that all rows were being manipulated through the application and not directly via something like Management Studio. Due to the complexity of the application, I couldn’t even be totally sure I had covered all programmatic access.

Eventually I started looking at MSSQL Server’s SQLCLR (SQL Common Language Runtime) functionality, which would allow me to run .NET code within SQL Server. Not letting the fact that I’m not a .NET programmer get in the way, I installed Visual Studio and started converting my Java comparison code into C#, which was pretty easy. Having decided that this was the way forward, my aims were to:

  • Create a means by which this could be called as a Stored Procedure in SQL.
  • Implement a trigger that used this Stored Procedure so as to avoid unsorted data getting into the table, programmatically or manually.
  • Allow incoming parameters to the CLR sproc, including references to additional Stored Procedures which in turn provide the table and columns by which to sort.

I’ve uploaded my rough solution to github: https://github.com/Sheepalot/CLRNatural

SortingTable.sql contains my simple table definition.
NaturalSort.cs contains all of the code.

The following:

[Microsoft.SqlServer.Server.SqlProcedure]
public static void NaturalSort(SqlString midLookupSproc, SqlString wrapperLookupSproc, SqlString sortSproc, SqlInt32 id)

…marks the NaturalSort method as being able to be called as a stored procedure from within SQL Server.

The parameters are as follows:

  • midLookupSproc – the sproc called in a binary search fashion, responsible for locating the correct sort value for the subject row. See getMidpointForSort.sql in my project.
  • wrapperLookupSproc – the sproc responsible for providing the row to sort. See selectSortableRow.sql in my project.
  • sortSproc – the sproc responsible for updating the sort value of a required row. See updateSort.sql in my project.
  • id – row id of the value being inserted/updated.
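
Before the trigger can call any of this, CLR needs enabling and the assembly registering in SQL Server. A sketch of the T-SQL follows; the path, and particularly the class name in the EXTERNAL NAME clause, are assumptions and may differ from my project:

-- Enable CLR on the instance
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
GO

-- Register the compiled assembly (path is hypothetical)
CREATE ASSEMBLY CLRNatural FROM 'C:\CLRNatural\CLRNatural.dll'
WITH PERMISSION_SET = SAFE;
GO

-- Expose the CLR method as a stored procedure; class name assumed
CREATE PROCEDURE dbo.NaturalSort
	@midLookupSproc NVARCHAR(4000),
	@wrapperLookupSproc NVARCHAR(4000),
	@sortSproc NVARCHAR(4000),
	@id INT
AS EXTERNAL NAME CLRNatural.[StoredProcedures].NaturalSort;
GO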

Finally, you can see this all being pulled together in the trigger I created on SortingTable for updates and inserts. See tiu_sort.sql in my project.

While passing additional stored procedures into the CLR stored procedure might seem messy, the aim was to allow the code to remain agnostic of the table/columns being sorted. In my situation, code residing outside of my web container (on the database) needs to change as infrequently as possible. Doing it like this means that simple changes, and applying it to other tables and columns, can be handled in SQL only.

Automating Excel for Headless, Server-side HTML Conversion

If you’re ever tasked with producing an Excel-formatted report from your Java application, you might want to hold fire before heading down the usual route of using Apache POI or Aspose. If you’ve got the time to work with these libraries then both provide a great deal of functionality; however, if you’ve already got code to produce HTML (or you’re simply looking to convert existing documents) then you might want to think about another method of getting it into native Excel: use Excel itself!

Excel is really rather good at converting HTML to its native .xls format and through Powershell it’s very easy to automate the Open and Save As operations into something that requires no UI interaction. For example:

# Launch Excel via COM
$Excel = New-Object -comobject Excel.Application
# 56 is the id for xlExcel8, the 97-2003 .xls format
$format = 56

# No UI, no prompts
$Excel.Visible = $False
$Excel.displayalerts = $False

# Open the file given as the first argument and save it as the second
$WorkBook = $Excel.Workbooks.Open($args[0])
$WorkBook.SaveAs($args[1], $format)

$Excel.Quit()

…executed in PowerShell, opens the file that is the first argument and saves it as the second. Excel is told not to be visible and not to display any alerts. The format 56 is the id for xlExcel8 (the 97-2003 xls format in Excel 2007-2013).

Having saved this PowerShell script (.ps1) we can break out of our JVM (running in Tomcat in my case) to call it on the command line.


// Build the command: script path plus input and output files (the same
// file here, so the HTML is overwritten with the converted result)
String cmd = "cmd /c powershell -ExecutionPolicy RemoteSigned -noprofile -noninteractive C:\\ExcelConversion\\excelconvert.ps1 " + toConvert.getAbsolutePath() + " " + toConvert.getAbsolutePath();
Runtime runtime = Runtime.getRuntime();
Process proc = runtime.exec(cmd);
proc.waitFor(); // block until the conversion finishes (declare or handle InterruptedException)

The above overwrites the existing HTML document with its native xls equivalent.

For obvious reasons MS do not recommend using their desktop software for headless server automation, but this works well for me, even under load and when dealing with large HTML documents.

Multiple Spring Boot Active Profiles in Tomcat

Recently I’ve been handling the deployment of a Spring Boot (1.3.0) project using Gradle and the Cargo plugin to deploy a war file to Tomcat. With some tinkering in Groovy to create myself a prompt for the Tomcat password and prevent a failed undeploy from failing the entire build, I had everything I wanted.

Each machine I was deploying to had 2 environment variables: spring.profiles.active to define which profile to use (stage, prod etc.) and jasypt.encryptor.password to allow the use of Jasypt encrypted properties. This worked great until we branched for the first time and suddenly I needed two deployments on the same machine, within the same Tomcat container, with different profiles. At this point I began to struggle.

The “active profiles” property can be set in a number of ways, ranging from JVM args if you’re using the embedded server, to environment variables and web.xml properties. My initial thought was to set it as a context-specific environment variable in Tomcat/conf/Catalina/localhost/stage.xml, e.g.

<Context docBase="...">
 <Environment name="spring.profiles.active" value="stage" type="java.lang.String" override="false"/>
</Context>

This worked and the deploy got hold of its profile-specific properties like a charm, but the problem is that Tomcat deletes these files during undeploy, so the file disappeared when the next build went up 😦

Next I decided I’d have to break the “location agnostic war” methodology and include a specific web.xml file per build. E.g. for stage I modified the Gradle war task like so:

task stageWar(type: War) {
     baseName = 'stage'
     webXml = file('src/stageWeb.xml')
}

…and created stageWeb.xml as follows:

<web-app>
 <context-param>
  <param-name>spring.profiles.active</param-name>
  <param-value>stage</param-value>
 </context-param>
</web-app>

I was now expecting it to work just as the context xml modification had previously, but without getting removed on every build. Sadly, for some reason, using the web.xml method didn’t work and the deploys defaulted to the base profile 😦

For some reason the property was only available to the servlet context and not to the Spring Boot application, so I had to modify our SpringBootServletInitializer to pass it over like so:

import javax.servlet.ServletContext;
import javax.servlet.ServletException;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.context.web.SpringBootServletInitializer;

@SpringBootApplication
public class Application extends SpringBootServletInitializer {

	public String profile = null;

	@Override
	public void onStartup(ServletContext servletContext) throws ServletException {
		// Grab the active profile from the servlet context
		profile = servletContext.getInitParameter("spring.profiles.active");
		super.onStartup(servletContext);
	}

	@Override
	protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
		// ...and pass it to the boot application, guarding against it being
		// absent (e.g. when launched directly via main)
		if (profile != null) {
			application.application().setAdditionalProfiles(profile);
		}
		return application.sources(Application.class);
	}

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}
}

Now every deploy is aware of its correct active profile and I’m happy 🙂
