Without the option to use SQL's ability to sort multiple columns at once, I was left with one option: do it at run time in PHP, ideally in a re-usable fashion using Laravel's Collection class.
So let's start off with a better explanation of what we needed to do. Let's take this super simple data set:
$collection = collect([
    [
        'name' => 'John Doe',
        'city' => 'Dallas',
        'state' => 'Texas',
        'size' => [
            'height' => 72,
            'weight' => 210
        ]
    ],
    [
        'name' => 'Jane Doe',
        'city' => 'Houston',
        'state' => 'Texas',
        'size' => [
            'height' => 60,
            'weight' => 120
        ]
    ],
    [
        'name' => 'Sam Swanson',
        'city' => 'Dallas',
        'state' => 'Texas',
        'size' => [
            'height' => 72,
            'weight' => 274
        ]
    ],
    [
        'name' => 'Jeremy Simpson',
        'city' => 'Dallas',
        'state' => 'Texas',
        'size' => [
            'height' => 71,
            'weight' => 210
        ]
    ],
    [
        'name' => 'Lois Smith',
        'city' => 'Seattle',
        'state' => 'Washington',
        'size' => [
            'height' => 65,
            'weight' => 132
        ]
    ],
]);
Please note this is not the actual dataset we were working with, or even the correct keys; it's simply an easy example. As you can see, we have a collection of 5 records representing individuals in 3 cities across 2 states, along with some descriptive data representing their height (in inches) and weight (in lbs). The goal was to sort in this order:

1. state, ascending
2. city, ascending
3. size.height, descending
4. size.weight, descending
5. name, ascending
Since we are using the Laravel Collection class, we do have access to an easy-to-use sorting function. If we want to sort by state, all we need to do is this:
$collection->sortBy('state');
The problem comes when you then want to sort by city. Doing this:

$collection->sortBy('state')->sortBy('city');

sorts the Collection by state and then sorts it again by city. In the end, you end up with a Collection sorted by city and nothing else, instead of one sorted by state (so Texas then Washington) and then by city (so Dallas then Houston under Texas, and Seattle under Washington). Laravel's Collections unfortunately do not have a built-in way to sort by multiple keys, so I had to come up with my own approach.
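To see why the second sort clobbers the first, here is a plain-PHP sketch of the same chained sort, with no Laravel involved. Note that "Aberdeen" is a hypothetical Washington city, chosen purely so the city order and state order disagree (with the example data's real cities, the two orders happen to coincide alphabetically):

```php
<?php

// Plain-PHP sketch of the chained-sort problem (no Laravel needed).
$rows = [
    ['city' => 'Dallas',   'state' => 'Texas'],
    ['city' => 'Houston',  'state' => 'Texas'],
    ['city' => 'Aberdeen', 'state' => 'Washington'],
];

// Equivalent of ->sortBy('state'): Texas rows first, Washington last.
usort($rows, fn ($a, $b) => $a['state'] <=> $b['state']);

// Equivalent of a follow-up ->sortBy('city'): re-sorts the whole list
// by city alone, splitting the Texas group apart.
usort($rows, fn ($a, $b) => $a['city'] <=> $b['city']);

// Final order: Aberdeen, Dallas, Houston -- Washington's record now sits
// in front of the Texas records, so the state ordering is gone.
```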
The very first thing I did was draw out exactly what had to happen so I could visualize the process. Some sketches (i.e. poorly drawn rectangles and brackets) alongside a physically printed data set led me to the following process:

1. Group by state so that all Washington records are together and all Texas records are together
2. Sort within each state group, and group again by city
3. Sort the city data within each state group, so Dallas -> Houston, then Seattle

With those instructions in mind, I decided the first step was to create the result the long way before turning it into something that could easily be used and re-used. My initial thought was "I can just use groupBy(), sortBy(), and flatten() to easily achieve this", and that was half correct. Sorting a single level (minus flattening it back down to a single level) was simple enough:
$collection->sortBy('state')->groupBy('state');
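As a dependency-free illustration, that sort-then-group step can be sketched with plain arrays, with usort() standing in for sortBy() and a loop standing in for groupBy():

```php
<?php

// Plain-array sketch of ->sortBy('state')->groupBy('state').
$rows = [
    ['name' => 'Lois Smith', 'state' => 'Washington'],
    ['name' => 'John Doe',   'state' => 'Texas'],
    ['name' => 'Jane Doe',   'state' => 'Texas'],
];

// Stand-in for sortBy('state'): orders rows alphabetically by state.
usort($rows, fn ($a, $b) => $a['state'] <=> $b['state']);

// Stand-in for groupBy('state'): bucket rows under their state value.
$groups = [];
foreach ($rows as $row) {
    $groups[$row['state']][] = $row;
}

// $groups now has a 'Texas' bucket (2 rows) followed by 'Washington' (1 row).
```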
I learned quickly that the best way to handle this was to swap points 1 and 2 of the plan and sort before grouping, as it allows for a more limited amount of code. Adding a second level was also simple, just by adding a map() call:
$collection->sortBy('state')->groupBy('state')->map(function (Collection $collection) {
    return $collection->sortBy('city')->groupBy('city');
});
With that, I could expand and do all 5 keys:
$collection->sortBy('state')->groupBy('state')->map(function (Collection $collection) {
    return $collection->sortBy('city')->groupBy('city')->map(function (Collection $collection) {
        return $collection->sortBy('size.height')->groupBy('size.height')->map(function (Collection $collection) {
            return $collection->sortBy('size.weight')->groupBy('size.weight')->map(function (Collection $collection) {
                return $collection->sortBy('name')->groupBy('name');
            });
        });
    });
});
And, again, it worked. The data was sorted exactly how I wanted it. The only problem was that the data was now nested 4 levels deeper than I needed. So I tried my initial thought, flatten(), a Collection method I had never really worked with. Unfortunately, all that did was create a single record in each state group, still nested 4 levels deep. Looking at the Collections - Available Methods documentation, I noticed the collapse() method, which promised:
collapses a collection of arrays into a single, flat collection
Perfect! That was exactly what I was looking for! Except after playing with it, the best I could do was to get the very first record and only that record. I didn't dig too far into it, but it seemed to work much the same way as flatten() does, in that it combines keys or something to that effect. Regardless, I was looking at unacceptable data destruction.
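I never pinned down the exact mechanism, but one plausible culprit, sketched here in plain PHP as a hypothesis rather than a diagnosis, is string-key collisions during a merge: when two groups share a key one level down, a naive merge keeps only the last value.

```php
<?php

// Hypothetical sketch of how merging grouped data can destroy records:
// with string keys, array_merge() overwrites earlier values instead of
// appending, so colliding group keys silently drop data.
$groupA = ['Dallas' => ['name' => 'John Doe']];
$groupB = ['Dallas' => ['name' => 'Sam Swanson']];

$merged = array_merge($groupA, $groupB);

// Only Sam Swanson survives under the 'Dallas' key; John Doe is gone.
```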
The ungroup() method

After digging some more, I realized this simply was not possible using the built-in Laravel methods. I did some googling, found nothing really relevant, and then decided to just build it myself. Initially, I was going to create a static util function to ungroup a collection. It was messy, but I was sure it would work. Then I remembered something I had read about in passing in the GitHub issues once: Collection Macros. We were using one or two View Macros already, but there was no documentation on Collection Macros anywhere. I did some quick Googling and found spatie/laravel-collection-macros. Spatie is one of the biggest contributors to the community, so I figured I could use their code as a reference to build my own method. In the end, I came up with this:
if (!Collection::hasMacro('ungroup')) {
    /**
     * Ungroup a previously grouped collection (grouped by {@see Collection::groupBy()})
     */
    Collection::macro('ungroup', function () {
        // create a new collection to merge the grouped collections into
        $newCollection = Collection::make([]);

        // $this is the current collection ungroup() has been called on
        // binding $this is common in JS, but this was the first time I had run across it in PHP
        $this->each(function ($item) use (&$newCollection) {
            // use merge to combine the collections
            $newCollection = $newCollection->merge($item);
        });

        return $newCollection;
    });
}
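For the curious, the macro's behavior can be sketched with plain arrays. Because groupBy() leaves numerically indexed rows inside each group, merging appends rather than overwrites:

```php
<?php

// Plain-array sketch of what the ungroup() macro does: merge each group's
// rows (numerically keyed) back into one flat list, in group order.
$grouped = [
    'Texas'      => [['name' => 'John Doe'], ['name' => 'Jane Doe']],
    'Washington' => [['name' => 'Lois Smith']],
];

$flat = [];
foreach ($grouped as $group) {
    // Mirrors $newCollection = $newCollection->merge($item); numeric keys
    // mean array_merge() appends instead of overwriting.
    $flat = array_merge($flat, $group);
}

// $flat is a single flat list: John Doe, Jane Doe, Lois Smith.
```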
All-in-all, an extremely simple method (thanks, in large part, to Laravel's fluent design). And it worked! If I added that to a service provider and combined it with my previous code, I got a single-level collection, all ordered in the correct manner. Here is a quick look at what the final manual implementation was:
$collection->sortBy('state')->groupBy('state')->map(function (Collection $collection) {
    return $collection->sortBy('city')->groupBy('city')->map(function (Collection $collection) {
        return $collection->sortBy('size.height')->groupBy('size.height')->map(function (Collection $collection) {
            return $collection->sortBy('size.weight')->groupBy('size.weight')->map(function (Collection $collection) {
                return $collection->sortBy('name')->groupBy('name');
            })->ungroup();
        })->ungroup();
    })->ungroup();
})->ungroup();
With the data successfully sorted, I looked to take that highly repetitive snippet and abstract it into something recursive, re-usable, and free of repetition. Having just created the ungroup() Macro, I believed another Macro was the easiest route. I then took a page out of Jeffrey Way's book and wrote how I wanted to execute the Macro before writing the Macro itself. This has always seemed backwards to me, and even Way has said it doesn't come naturally to many devs, but it seemed like the easiest approach to take.

How I wanted this to work was to simply pass in an array with the keys to sort by in the correct order. Expanding on that, due to an application necessity, I knew I also wanted to define the direction each key is sorted in. These requirements led to:
$collection->sortByMulti([
    'state' => 'ASC',
    'city' => 'ASC',
    'size.height' => 'DESC',
    'size.weight' => 'DESC',
    'name' => 'ASC',
]);
Perfect, now to write the sortByMulti()
macro.
array_reduce()
My first attempt at this involved using array_reduce(), which would have let me do this without recursion. As I alluded to above, I was confident this could be handled with a recursive function, but I try to avoid recursion for optimization and maintenance reasons where possible. I started writing it and encountered an issue pretty quickly: you can't get keys in array_reduce(), which was necessary if I wanted to define what order to sort each key in. With a bit of searching, I found this comment in the PHP docs that showed a trick for getting the keys of the array. Using that, I was able to get to the point where I had the data sorted, but 4 levels deep. I couldn't come up with an easy way to implement the ungroup() macro, since the return of array_reduce() is sent to the next iteration and I needed that data to stay grouped until all grouping and sorting had completed. Not seeing an obvious solution within my limited time allotment, and knowing how complicated array_reduce() can be, I abandoned this attempt.
With array_reduce() out of the question, I decided to just bite the bullet and write a recursive method to handle this. I ran into a few hiccups:

- When I passed the anonymous function into itself via use(), it was available in-scope, as expected, but set to null. I then discovered I could pass it in by reference (i.e. use (&$anonFunc)), which wasn't pretty but I could live with it.
- A benefit of array_reduce() was that I could iterate over one array (my keys) while manipulating another. With a recursive function, I had to find another way to work with those keys.

My final solution was this:
if (!Collection::hasMacro('sortByMulti')) {
    /**
     * An extension of the {@see Collection::sortBy()} method that allows for sorting against as many
     * keys as needed. Uses a combination of {@see Collection::sortBy()} and {@see Collection::groupBy()}
     * to achieve this.
     *
     * @param array $keys An associative array whose key is the key to sort by (which accepts dot-separated
     *                    values, as {@see Collection::sortBy()} would) and whose value is the order
     *                    (either ASC or DESC)
     */
    Collection::macro('sortByMulti', function (array $keys) {
        $currentIndex = 0;
        $keys = array_map(function ($key, $sort) {
            return ['key' => $key, 'sort' => $sort];
        }, array_keys($keys), $keys);

        $sortBy = function (Collection $collection) use (&$currentIndex, $keys, &$sortBy) {
            if ($currentIndex >= count($keys)) {
                return $collection;
            }

            $key = $keys[$currentIndex]['key'];
            $sort = $keys[$currentIndex]['sort'];
            $sortFunc = $sort === 'DESC' ? 'sortByDesc' : 'sortBy';
            $currentIndex++;

            return $collection->$sortFunc($key)->groupBy($key)->map($sortBy)->ungroup();
        };

        return $sortBy($this);
    });
}
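As a point of comparison (an assumption-laden sketch, not the macro above), the same multi-key ordering can also be expressed with plain usort() and a comparator that walks the key list, which avoids the grouping entirely; dot-path keys like size.height are omitted here for brevity:

```php
<?php

// Dependency-free sketch of multi-key sorting with a single comparator.
$rows = [
    ['state' => 'Texas',      'city' => 'Houston'],
    ['state' => 'Washington', 'city' => 'Seattle'],
    ['state' => 'Texas',      'city' => 'Dallas'],
];

$keys = ['state' => 'ASC', 'city' => 'ASC'];

usort($rows, function ($a, $b) use ($keys) {
    foreach ($keys as $key => $direction) {
        $cmp = $a[$key] <=> $b[$key];
        if ($direction === 'DESC') {
            $cmp = -$cmp;
        }
        if ($cmp !== 0) {
            return $cmp; // the first key that differs decides the order
        }
    }

    return 0; // rows are equal on every key
});

// $rows: Dallas and Houston (Texas) first, then Seattle (Washington).
```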
So, some explanations:

- I take the $keys array and remap it so that each key/value pair is stored in a sub-array. This was necessary because the closure has to access the array by a numeric index ($keys[$currentIndex]), which is not possible with an associative array. This isn't the cleanest code I've written, but it works and I thought it was a clever solution.
- The closure references itself via use (&$sortBy) so that it can be passed recursively to map().
- $sortFunc holds the name of either sortBy() or sortByDesc(). I call that function through the variable to avoid having to write the return line twice as part of a conditional.
- $currentIndex, passed by reference, tracks which entry of $keys we're on. Again, this was something I was displeased with, but I could not find a better solution.

In the end, it did its job marvelously. I can now run this:
$collection->sortByMulti([
    'state' => 'ASC',
    'city' => 'ASC',
    'size.height' => 'DESC',
    'size.weight' => 'DESC',
    'name' => 'ASC',
]);
and get these results:
[
    [
        "name" => "Sam Swanson",
        "city" => "Dallas",
        "state" => "Texas",
        "size" => [
            "height" => 72,
            "weight" => 274,
        ],
    ],
    [
        "name" => "John Doe",
        "city" => "Dallas",
        "state" => "Texas",
        "size" => [
            "height" => 72,
            "weight" => 210,
        ],
    ],
    [
        "name" => "Jeremy Simpson",
        "city" => "Dallas",
        "state" => "Texas",
        "size" => [
            "height" => 71,
            "weight" => 210,
        ],
    ],
    [
        "name" => "Jane Doe",
        "city" => "Houston",
        "state" => "Texas",
        "size" => [
            "height" => 60,
            "weight" => 120,
        ],
    ],
    [
        "name" => "Lois Smith",
        "city" => "Seattle",
        "state" => "Washington",
        "size" => [
            "height" => 65,
            "weight" => 132,
        ],
    ],
];
As mentioned in the beginning of this post, I had about an hour to an hour and a half to write this. As a result, I did take some shortcuts, and this can definitely be optimized:

- The repeated sortBy() and groupBy() calls create a lot of intermediate collections, and ungroup() also creates a possibly unnecessary collection. Unfortunately, there isn't a much better option short of writing it using the native array methods, which would have been slightly more performant. The biggest risk with my method is memory usage and perhaps memory leaks. My use case only sees me sorting a collection of about 200 records at a time, so I wasn't overly concerned with this.
- The $keys argument of the sortByMulti() method requires the ordering to be passed in with each key. This is a bit of a pain, and I would have liked for you to only have to declare keys that are sorted DESC, but I felt it was an all right compromise to keep it simple and limit time spent.
- The recursion could perhaps still be avoided with the array_reduce() approach I attempted, but the only way I can think of to use the ungroup() method would be a separate call after the reduction, which would likely have to be recursive as well.
- Some micro-optimizations are possible as well (such as inlining the $key and $sort variables).

All in all, this was a fairly simple solution to a problem I've skirted around solving many times now, it only took a little over an hour to accomplish, and I am fairly satisfied with the overall result.
I was going to leave this be after everything was resolved, but it seems I am far from alone ([1] [2] [3] [4] [5] [6]) and I want to add my voice to the pile and hopefully force Motorola to fix their practices.
On March 2, 2016, I experienced a failure in my phone. The phone restarted randomly without user interaction and when it turned back on, it wouldn't make it beyond the boot screen before the screen would go black. The screen would stay on indefinitely and occasionally the status bar would flash at the top of the screen. The phone itself heated to an extreme level within a few minutes, enough that it hurt to touch.
After work, I tried some manual debug work. Using the various tools in the Android SDK, I went into safe mode and experienced the same issues. I rebooted the device's bootloader to the same effect. I tried to charge the phone while it was turned off as well but that failed with a battery icon that had an exclamation mark in it. As a last ditch effort, I wiped the device. This allowed me to make it beyond the boot screen and initiate the Android setup, but Wi-Fi no longer functioned and I still was unable to charge the device.
Admitting defeat, I submitted a repair request to Motorola, was given a FedEx sticker, and a promise that the phone would be returned to me 5 business days after receipt.
The following day, I dropped the phone off at a FedEx store. The employee in the store was extremely helpful and packed it very well. FedEx tracking showed it shipped that day.
The phone arrived at Motorola's facility at 12 AM. Motorola emailed me a confirmation of the delivery at 4 PM. That seemed very slow to me: a day and a half between receipt and confirmation. I've worked in a warehouse before, and that turnaround time from delivery to informing the employee their package had arrived would have been absolutely unacceptable.
Two days later, I go to the Motorola website and try to find the status of my repair. The tool is either completely broken or Motorola neglected to enter my repair into their system. Worried that something had been lost in the system, I contact their online support. In the form for starting that chat, it asks for what you want to contact them about and one of the options is "Repair Support and Status"
My conversation does not go as hoped. I told them their online tool was not working and I wanted to know the status. Instead of helping, they simply say that this was a technical support chat and they do not handle repair support here, despite the site indicating they do. I was told to call phone support instead.
I decided to wait until the five business days was done before contacting support again, figuring I should give them a fair chance. On the fifth day, I called the phone support. I was told by this agent that my device was being worked on and would be shipped out on March 17, which they said was 5 business days after receipt. I would receive an email on that day and I was instructed not to call back until then.
Thursday came and went and I heard nothing from Motorola. On Saturday, I called again. This time, I was told my phone was being replaced, not repaired. This was the first I had heard of this in two and a half weeks. The agent said they were out of stock of my model but they would ship out a device by Wednesday or Thursday of the following week. So I waited some more.
I'd like to note that my specific device was available for sale on these days from Motorola and Amazon. I have not seen the device out-of-stock since I sent the phone in. I checked several times a week during this period of time.
After a further week and a half without any contact at all, and with the Repair Status Tracker still not working, I called phone support again. The agent picked up the phone, and I could hear them breathing on the other end along with the other people in the call center. I said "Hello?" several times and, after about 30 seconds of this, they hung up on me.
I called back immediately and this time was on the phone with a woman with an extremely heavy Indian accent. All other contact had been with an Indian call center as well, but they had been mostly understandable. Several times, I had to have her repeat what she was saying and the only thing of value she said was "a representative from the repair center will call you tomorrow."
This should come as no surprise, but no one from Motorola contacted me.
I contacted online support fearing I would not be able to control my frustration and anger on the phone. Again, they try to push me off saying they do not handle repair support and status. This time I pressed them. I demanded answers. I complained that I had gone nearly a month without a phone and was on business day 16, three times the estimated amount. After repeatedly trying to get me to leave, the agent finally said I would be contacted the following day by a representative from the repair facility.
At this point, I posted a few tweets out of frustration, messaged Motorola on Twitter, and made a Facebook post about it.
Motorola's service department is bad enough to make me never get another phone from them again
— Josh Janusch (@Apocalyptic0n3) March 30, 2016
I sent my Moto X Pure into Motorola for a battery replacement a freaking month ago. Cannot even get an update out of them. No more Motorola
— Josh Janusch (@Apocalyptic0n3) March 30, 2016
@Moto_Support messaged me and told me to email the details of my repair to mailto:supportforums@motorola.com and they would help me out. This was the first bit of reaching out that Motorola did in a month. I sent an email immediately.
I receive an email back from that address saying they are looking into it and would contact me soon with more details.
After another 24 hours with no reply, I emailed them back and asked for an update. They responded a few hours later and said a phone would be shipped out in the next few days.
Later that night, I received an automated email from Motorola and another from FedEx informing me my new phone had shipped.
After more than a month without my phone, I finally received the replacement.
From what I have read online, this interaction with Motorola and the complete disregard for their customers is commonplace at the moment. I love my Moto X Pure. I loved my second generation Moto X. But I will never purchase another device from Motorola/Lenovo after this. They have a severe problem that they need to fix.
Laravel 5.1 quietly introduced morphMap. The Morph Map is an extension of the polymorphic relationships and an effort to make them easier to use. If you used them in a pre-5.1 project, you'd see a record like this:
[
    "id" => 1,
    "comment" => "Hello world",
    "commentable_type" => "App\Models\Post",
    "commentable_id" => 5
]
The commentable_type included the full namespace for the associated record type, which had a few downsides; for example, a long namespace (such as App\Models\Customers\Projects\Bids\BidInvitation) means that you have to set a long max length on the column.

5.1 thankfully addressed this via pull request #9891, which added the Relation::morphMap() method. That method defines a global mapping of names to class paths. You do this with:
Relation::morphMap([
    'post' => App\Models\Post::class,
    'video' => App\Models\Video::class,
    'user' => App\Models\User::class,
]);
That will allow you to use post instead of App\Models\Post in your database, and the name will automatically be mapped when setting up the relationships within your Eloquent model. The best place to call morphMap() is in a service provider; I recently called it in AppServiceProvider. So now, you would instead add this record:
[
    "id" => 1,
    "comment" => "Hello world",
    "commentable_type" => "post",
    "commentable_id" => 5
]
and it would behave in the exact same manner as the previous example does. This feature is completely undocumented right now with no clear indication why. I highly recommend using it and avoid writing your own implementation as we were forced to do on a 5.0 project.
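To illustrate the core idea outside of Eloquent, here is a plain-PHP sketch of how a morph map resolves the short type stored in the database back to a class path; at its heart, it is just a lookup table consulted before instantiating the related model:

```php
<?php

// Plain-PHP sketch of morph-map resolution: the database stores a short
// alias, and a lookup table turns it back into the real class path.
$morphMap = [
    'post'  => 'App\Models\Post',
    'video' => 'App\Models\Video',
    'user'  => 'App\Models\User',
];

$record = [
    'id'               => 1,
    'comment'          => 'Hello world',
    'commentable_type' => 'post',
    'commentable_id'   => 5,
];

// Resolve the alias to the class that should be instantiated.
$class = $morphMap[$record['commentable_type']];

// $class === 'App\Models\Post'
```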
Over the last few weeks, I've wanted to build a larger project in Laravel outside of my job. After a lot of deliberation, I decided to go with a blog platform. The reasoning for that was simple:
This will be something I work on in my free time, hopefully a few hours a week after the initial build over my Thanksgiving vacation. I will attempt to write about all decisions I have to make as well as general architecture and functionality I am building.
My initial goal for this phase is just the upper-level framework. The blog needs to be able to:
Nothing crazy for the first phase of the build, just the groundwork. As I mentioned, however, I want to be constantly adding new functionality. To this end, my further-reaching goals include, among other things, search that goes beyond simple LIKE queries.

That's just a quick overview of my intentions. It would also be nice to, at some point, add theme and plugin support, although that is very low on my list of priorities.
I decided to build it in Laravel 5.1 on PHP 7 using Twig, Bootstrap, jQuery, Scss, Gulp, MySQL, and Homestead.
For the backend, I'd like to keep it up to date with Laravel updates as they are released, so I expect an update to 5.2 in the next month. I'm going with PHP 7 as a way to expose myself to it as well as get the improved performances. I'll use MySQL as the database for familiarity, although using Eloquent means I can easily switch to PostgreSQL or something else if I wanted to. I'm undecided what web server software I'll use at this time, though I suspect I'll stick to my usual Debian 8 with Apache 2.
For templating, I will use Twig instead of Blade. The reasoning behind this warrants an entire post, but suffice it to say that I do not agree with a few of the things that Blade can do, and I like how strict Twig is and how little functionality it actually includes.
For front-end, I will use Bootstrap and jQuery. I know both well and they will do their job well enough. I had initially hoped to do this using Bootstrap 4, but it has been three months since the first alpha and there are glaring bugs in that alpha that make using it impossible right now. Hopefully, it will be possible to upgrade to 4 at some point in the near future.
I will also be using Sass (specifically Scss) for all of my custom styles and Gulp to compile it, as well as any JavaScript classes I write. At this moment, I don't intend to use CoffeeScript or TypeScript or anything of that nature, but it could happen at a later time.
The other decision I made was that the post editor itself would be Markdown-based, perhaps with an adapter-based structure so that something like CKEditor could be used in its place at some other time. As I mentioned in Two Weeks With Ghost, I really enjoy using Markdown and I cannot fathom building a blog that does not utilize it.
I'm using Homestead for my local server just so I can be exposed to it. We do not use it at work and instead use a Vagrant box that matches our production environments. I want to use Homestead here just so I know how it works and what it does differently from our standard setup.
The project itself is unnamed at the moment, but I am leaning toward using Slip or Slipstream.
I've created the very basic database structure as well as the models for Posts, Tags, Categories, Users, etc., and controllers for Post and User management. Next up is basic route creation.
My plans are to work on this when I have some downtime at home, so progress may be slow. I will post new blog entries as progress is made. When the blog is in a working state, I'll release it on my GitHub.
This is how pagination is handled in Redmine:
<p class="pagination">
    <span class="current page">1</span>
    <a href="/projects/redmine/issues?page=2" class="page">2</a>
    <a href="/projects/redmine/issues?page=3" class="page">3</a>
    <span class="spacer">...</span>
    <a href="/projects/redmine/issues?page=181" class="page">181</a>
    <a href="/projects/redmine/issues?page=2" class="next">Next »</a>
    <span class="items">(1-25/4504)</span>
    <span class="per-page">Per page:
        <span>25</span>,
        <a href="/projects/redmine/issues?per_page=50">50</a>,
        <a href="/projects/redmine/issues?per_page=100">100</a>
    </span>
</p>
Three distinct elements all in the same container with no clear designation of what item is what. The sidebar has the same problem:
<div id="sidebar">
    <h3>Issues</h3>
    <ul>
        <li>
            <a href="/projects/redmine/issues?set_filter=1">View all issues</a>
        </li>
        <li>
            <a href="/projects/redmine/issues/report">Summary</a>
        </li>
    </ul>
    <h3>Custom queries</h3>
    <ul class="queries">
        <li>
            <a href="/projects/redmine/issues?query_id=84" class="query">Documentation issues</a>
        </li>
        <li>
            <a href="/projects/redmine/issues?query_id=1" class="query">Open defects</a>
        </li>
        <li>
            <a href="/projects/redmine/issues?query_id=2" class="query">Open features</a>
        </li>
        <li>
            <a href="/projects/redmine/issues?query_id=931" class="query">Patch queue</a>
        </li>
        <li>
            <a href="/projects/redmine/issues?query_id=42" class="query">Plugin issues</a>
        </li>
        <li>
            <a href="/projects/redmine/issues?query_id=7" class="query">Translation patches</a>
        </li>
    </ul>
</div>
Those are just two sections in a sidebar. If you add other sections through plugins, it just adds more items to that container in the same manner, although at least here there are lists half-separating content.
All-in-all, Redmine is a very tough thing to theme and, often, to fix it you have to rework the structure through Javascript to make it doable, which is a terrible habit to get into.
I first heard about Ghost in May 2013, I believe through a tweet by then-editor-in-chief of The Verge, Josh Topolsky. I was immediately intrigued by the promises of its Kickstarter campaign and contributed $20 to it. I even got my name on the Launch blog post! Woo!
The premise was great. None of the CMS-centric extras of Wordpress, a focus on performance and the blog, and a myriad of cool design and features. It seemed like the future of blogging and I was excited for it.
An image from their Kickstarter, an image that made me back them.
Two years after its initial launch, I decided to start this blog in an effort to voice things I encounter in my work each day. I chose Ghost with no hesitation. Setting it up was... mostly painless. I'm not super familiar with Node.js, so getting it to run without having to babysit it was an experience I wish I hadn't had to go through (just use nodejitsu/forever, for the record). Finding a decent theme was difficult, and configuring the theme even more troublesome. From the very installation, it was clear to me that this wasn't the platform I had been promised and that my dream platform wasn't as great as I had hoped.
Before I get into the negatives, I do want to say that Ghost has plenty of great things going for it.
For some inexplicable reason, I absolutely love Markdown. I use it daily at work with Quiver and for GitLab. For me, it has become a natural syntax to work in and I have no interest in using a WYSIWYG editor. Ghost's Markdown editor is fantastic and really appealed to me when I started using it.
The Ghost Markdown Editor
The side-by-side editor works wonderfully and while it would have been nice for the preview to scroll with the actual editor, I don't think I could have asked for a more ideal publishing tool.
One of my first appeals with Ghost was its promise of speed, apparently a hallmark for Node.js apps. Google developers frequently tout the "1000ms" mark as the absolute max for a page to load and even after setting up my blog through Apache and mod_proxy, plus a light theme, Ghost was coming in at <800ms. By comparison, this Wordpress blog is loading at 2000ms, something I need to work on reducing.
Ghost is very, very simple, especially in its current 0.7.1 state. You can theme it, you can write posts, you can manage some metadata, and you can add additional users to the blog. That's it. This was initially a great appeal to me and it made for a simple experience blogging.
Ghost currently has no support for plugins. It's in the works and I imagine it will be in the 1.0 release, but it's not there yet. There was no way to easily add comments or analytics or special page types or, well, anything. Ghost is what it is right now and there's not much anyone can do with that.
Themes are there and there are quite a few available and, thankfully, they're even extremely easy to make. But there are a laundry list of problems with them.
- With WordPress, you can go to wp-admin/theme-install.php and have instant access to thousands of free themes; Ghost is really at a disadvantage here.

Ghost updates are currently very poor. To update, you stop your Ghost instance, download the update, copy over your old content folder and config files, run npm install (which updates assets and runs migrations), and then restart it. This is not something I believe a non-developer would be able to do, and you certainly couldn't do it without SSH access (something that might not be available to you on shared hosting).
Themes are even more difficult. Again, there is no built-in mechanism for updates and with the way you have to edit themes to make them usable, something like a simple git pull
doesn't really work.
Those screenshots and that amazing intro video from the Kickstarter don't exist in Ghost, at least not yet. Perhaps some day, but not now.
As I alluded to earlier, installation is not as easy as other systems. Ghost's instructions are simple to follow, but without access to the command line, it would be impossible to install. It requires npm, which most servers do not come pre-installed with (compared to PHP which most do). It requires running it via npm, which is another thing that you need command line for. Modifying any part of the install requires modifying files. It's simply painful.
I went with Wordpress mostly for the support. It is often updated, there is a large community that is making useful plugins and beautiful themes, and there are a ton of resources out there for making it behave the way I want it to. After figuring out using a Markdown editor and syntax highlighting, it seemed like a good fit.
Ghost is a young platform. It's initial release was only two years ago and we still have not seen a 1.0 release. It's promising, it's open source and well liked by the open source community, and it has a vision. I fully expect it to be one hell of a blogging platform someday. But that day isn't today, unfortunately. It's not ready for the prime time, and for that reason I have switched to Wordpress.
For a quick background, when I started in June 2013, the team was using a 3-year-old install of Bugzilla on an internal server where entire projects would often be handled in a single ticket. It was a mess.
In November 2014, after considerable headaches, one of our project managers pushed us over to vTiger, a CRM software that allowed us to combine sales, client interaction, design, development, Q&A, and SEO into a single place. It seemed perfect, until we actually started to heavily use it. We discovered how lackluster the ticketing system was (which had absolutely no focus on proper development practices), how simple the functionality was, how poor email notifications were, and how insufficient the UI was. In short, it was a disaster that we realized within a month was not going to work.
I began searching for something better in December. I started with JIRA because I had experience with it at a previous job and, well, it is the king of the hill in this market, after all. JIRA, however, wasn't right for us, a fact many friends at other agencies have discovered as well. It's too focused for an agency. It's meant for a company with a few projects; a software company developing one or two products and little else. There's no proper way to organize everything under a client (we have clients with as many as four projects with us), the project codes are restrictive and unhelpful, and cross-project management can be difficult.
I spent the next two months looking at the rest of the market, which is surprisingly crowded. First, I considered and briefly used our GitLab installation, which works as a bug tracker but little else. We toyed with Redmine but were disappointed with the UI. I spent weeks looking at AlternativeTo for new options.
Some, like Trello, Asana, Basecamp, and any number of other "Agile" project managers are little more than complex to-do lists and have no place being used in a development team workflow. Some, like Taiga, are very new and very promising but are not nearly developed enough to be a legitimate contender. Others still, such as Redmine, Bugzilla, Fogbugz, and Trac are either too focused or too plain. We even came very close to using OpenProject, which is a Redmine fork with a better UI but that, too, was young and missing much community support.
We did. Despite it having the least welcoming UI and some of the strangest organization choices of the lot, we settled on Redmine, and it has worked beautifully.
First, we had to improve the interface, which is outdated and tough to look at. If we wanted the entire team (including PMs, Designers, and SEO) using this, it had to be something they wanted to use and didn't despise looking at. Thankfully, Redmine has a large database of themes and a decent community to support them.
The beauty that is the default Redmine theme
After testing 15 or so themes, I came upon Minelab by Hardpixel (which I have since forked and maintained for 3.x at jjanusch/minelab). It offered a modern, improved UX and was honestly just easy and enjoyable to use.
A preview of Minelab
With a bit of work on my part, we had it up and running and ready to be used, but first we had to work out how to use it. To this end, we decided that all top-level projects would represent our clients. Then their projects would exist as sub-projects under them.
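As a sketch of that hierarchy (with a made-up client name; any resemblance to a real client is coincidental):

```
Acme Co.               <- top-level project: the client
├── Acme Website       <- sub-project
├── Acme Web App       <- sub-project
└── Acme SEO Campaign  <- sub-project
```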
Redmine projects work in such a way that sub-projects bubble up through their parents. So with the structure above, we had a way to track all issues and have a centralized Calendar and Gantt chart for each client, as well as total hours. It was convenient, and this structure gave our Redmine a clear hierarchy of what belongs where, which helped with assigning membership as well.
We were able to implement this system and improve actual project management (separation of tasks, dividing of work among team members, communication, etc.) to make Redmine work extremely well. At this point, I cannot imagine working without it.
Redmine is not without its faults. The largest, for me as its administrator anyway, is that it is written in Ruby using Rails. This isn't really a bad thing; it's just that we generally use PHP at VURIA. No one on the team knows Ruby beyond some minor work we've done for Vagrant and GitLab. This has made it a bit difficult to make modifications when we've needed to, such as adding items to the menu bar.
Another issue is the way comments are handled, an unintuitive UX decision that has haunted Redmine since the beginning. There is no quick comment; you have to edit an Issue and add a "Note", which is difficult for new users to figure out. Fortunately, this looks to be a candidate for fixing in Redmine 4.0.
The Projects list, which is just a flat list, is also largely useless in our situation. We currently have more than 250 projects in the system and that page is 10,000px tall. It's difficult to use and we generally skip it entirely. Again, though, this is a candidate for 4.0.
I spent nearly 2 years fighting against our project management software situation. I researched every available option I could find and, for an agency anyway, Redmine was the best choice. After about five months of use, it looks like this will be the final solution and we will not have to worry about this further. I hope to do a follow-up in a few months to further address our actual usage after we have had more time with it.
All in all, Redmine has turned out to be nearly exactly what VURIA needed and I can readily recommend it for other agencies facing this same problem.
Let's start off with the problem itself. The project was a large report creator that started as a way to generate dynamic reports based on a few dozen fields that the users had full control over. To do this, we utilized Twig templates using a recursive include method. We then fed the output to wkhtmltopdf to convert the HTML to PDFs, sometimes 30-40 pages long.
The next phase of the project switched gears. We forked the editor, added dozens of new record types and input methods, and instead of generating PDFs, we had to populate several PDFs full of form inputs. With this phase, we had to account for users entering far more text than fits into a PDF input and move the overflow onto supplemental pages.
This was far more trouble than we had bargained for.
My first thought was somehow connecting the inputs of the PDF so that text automatically flows between them. This is possible through embedding JavaScript in the PDF. Unfortunately, we could not modify the source documents, and this wouldn't really work programmatically since it relies on keyup, or similar, events.
The next possibility was finding a tool that would take an input and say if it was too much. That would have helped, but it wouldn't help in moving the overflow to supplemental pages.
Another option was to measure the width of each character and calculate it all manually. This might have worked to an extent. It would have been a ton of work (the client creates reports in dozens of languages, so the character set would be huge), but it showed promise. After investigation, however, we realized that kerning, line heights, line breaks, field padding, et al. would play into it. We determined it was highly unlikely we could handle all situations or do this efficiently, so we moved on.
The final option was to calculate how large a set of text is and figure out if it fits and where it breaks. This seemed like the most fool-proof way so we went with it.
This solution proved difficult. Few PHP libraries or command line tools have the ability to measure the height of text. We were already using wkhtmltopdf and PDFtk, so we first tried those. They were a no-go.
Next, we looked at PDFlib, a native PHP extension. This allowed me to half-accomplish the goal in a roundabout way. I was able to create a temporary PDF that would let me set the dimensions of a text field (called a textflow), put text into it, and move any remaining text into a second field. It worked, and it was fast. A 10-page overflow could be done in milliseconds. My prototype for this is no longer available, but it was built around this code from PDFlib's documentation:
do {
    /* Fill the first column */
    $result = $p->fit_textflow($tf, $llx1, $lly1, $urx1, $ury1, $optlist);

    /* Fill the second column if we have more text */
    if ($result != "_stop") {
        $result = $p->fit_textflow($tf, $llx2, $lly2, $urx2, $ury2, $optlist);
    }
} while ($result == "_boxfull" || $result == "_nextpage");
Sadly, this was a one-way street. You could input text, but there was no way to output the text in a specific textflow. So there was no way to figure out which text needed to be added to supplemental pages or even where to break the first field, just that it wasn't going to fit.
With PDFlib out, I reluctantly turned toward PHP libraries. I had previously had some experience with DOMpdf, mPDF, fPDF, and a few others. They were extremely slow and it worried me that this was what was left to me without resorting to an API written in Java or something else.
We went back through those, and none of them had the functionality we were looking for, or had it implemented well enough to be usable. We finally found TCPDF, which included a function that did exactly what we needed: getStringHeight().
getStringHeight() takes a string, width, font formatting, and padding and outputs the height of the string within those constraints. We could then compare that to the height of the field and know if there was overflow. We finally had it.
The first prototype went word-by-word through the input until it went over the height. It worked and it worked well... if you had 10 minutes to handle each field. It was extremely slow. The basic logic behind this:
function getOverflow(\TCPDF $pdf, $string, $fieldWidth, $fieldHeight) {
    $paddings = ['T' => 0, 'R' => 2.835, 'B' => 0, 'L' => 2.835];
    // Surround line breaks with spaces so they survive the explode() below.
    // Shouldn't be necessary, but it didn't work correctly otherwise.
    $string = preg_replace('/(?<! )\\n(?! )/', sprintf(' %s ', PHP_EOL), $string);
    $words = array_filter(explode(' ', $string));
    $wordCount = count($words);
    $i = 0;
    // Add one word at a time until the string no longer fits in the field
    while ($i < $wordCount) {
        $testString = implode(' ', array_slice($words, 0, $i + 1));
        if ($pdf->getStringHeight($fieldWidth, $testString, false, false, $paddings) > $fieldHeight) {
            break;
        }
        $i++;
    }
    return [
        'text' => implode(' ', array_slice($words, 0, $i)),
        'overflow' => implode(' ', array_slice($words, $i)),
    ];
}
It's relatively simple: split the string on spaces, loop through the words, and compare the height of the accumulated string to the field height. It worked, albeit very slowly. But as a proof-of-concept, it did its job and proved that this problem was solvable.
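To see the word-by-word split in action without a real TCPDF instance (which needs fonts and a document), here's a self-contained sketch. FakePdf is a hypothetical stand-in whose getStringHeight() simply charges a fixed height per word; the splitting logic is the same as above, minus the line-break handling.

```php
<?php
// Hypothetical stand-in for TCPDF so the example runs anywhere:
// every word is pretended to take 2 units of height at any width.
class FakePdf
{
    public function getStringHeight($w, $txt, $reseth, $autopadding, $cellpadding): float
    {
        $txt = trim($txt);
        return $txt === '' ? 0.0 : count(explode(' ', $txt)) * 2.0;
    }
}

// Same word-by-word logic as the prototype above
function getOverflow($pdf, string $string, float $fieldWidth, float $fieldHeight): array
{
    $words = array_filter(explode(' ', $string));
    $i = 0;
    $wordCount = count($words);
    while ($i < $wordCount) {
        $testString = implode(' ', array_slice($words, 0, $i + 1));
        if ($pdf->getStringHeight($fieldWidth, $testString, false, false, []) > $fieldHeight) {
            break;
        }
        $i++;
    }
    return [
        'text'     => implode(' ', array_slice($words, 0, $i)),
        'overflow' => implode(' ', array_slice($words, $i)),
    ];
}

$result = getOverflow(new FakePdf(), 'one two three four five six', 100, 7);
echo $result['text'], ' | ', $result['overflow'];
// Three words fit (height 6 <= 7), the rest overflows: "one two three | four five six"
```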
Now that it worked, it needed to be optimized. After some tests, it became clear that getting it to an acceptable level was unrealistic. We resigned ourselves to offloading the generation to jobs run in the background that would then be sent to the user. It wasn't ideal, but it was acceptable given the functionality it provided.
So the first plan was to start with a simple one-way binary search: start at the full string, halve it until it's shorter than the field, and then run the previous function starting at that word.
function getOverflow(\TCPDF $pdf, $string, $fieldWidth, $fieldHeight) {
    $paddings = ['T' => 0, 'R' => 2.835, 'B' => 0, 'L' => 2.835];
    // Surround line breaks with spaces so they survive the explode() below.
    // Shouldn't be necessary, but it didn't work correctly otherwise.
    $string = preg_replace('/(?<! )\\n(?! )/', sprintf(' %s ', PHP_EOL), $string);
    $words = array_filter(explode(' ', $string));
    $wordCount = count($words);
    // Halve the word count until the test string fits inside the field
    $i = $wordCount;
    $testString = $string;
    while ($i > 0 && $pdf->getStringHeight($fieldWidth, $testString, false, false, $paddings) > $fieldHeight) {
        $i = (int) floor($i / 2);
        $testString = implode(' ', array_slice($words, 0, $i));
    }
    // Then add one word at a time until it overflows again
    while ($i < $wordCount) {
        $testString = implode(' ', array_slice($words, 0, $i + 1));
        if ($pdf->getStringHeight($fieldWidth, $testString, false, false, $paddings) > $fieldHeight) {
            break;
        }
        $i++;
    }
    return [
        'text' => implode(' ', array_slice($words, 0, $i)),
        'overflow' => implode(' ', array_slice($words, $i)),
    ];
}
This improved things dramatically. What previously took minutes was now taking ~45s. That was only for a single field, however, and was still unacceptable; we were hoping to do an entire document (with dozens of fields) in an average of 10s.
Next, we had to look into ways to improve that. We took the binary search a step further and made it go forward until the string is taller than the field and then go in reverse word-by-word. This was another improvement, decreasing that average by another 10s.
We realized here that we had to automate the binary search: go back and forth until we're within a few words, and then go into word-by-word mode. That would greatly reduce iterations.
Unfortunately, I can't show full code samples beyond this point because it's what was actually used in the application, but I can give a general overview.
protected function operateWhile(array $words, $startingIndex, $fieldWidth, $fieldHeight, $operator, $operatorVal, $comparison) {}
The above method is what the getOverflow() method grew into. It runs a search in a specified direction using a specified interval. For example, the previous function could be run with:
$idx = $this->operateWhile($words, count($words), $fieldWidth, $fieldHeight, '/', 2, '>=');
$idx = $this->operateWhile($words, $idx, $fieldWidth, $fieldHeight, '+', 1, '<=');
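Since the real implementation can't be shared, here's a rough, self-contained sketch of how such a method might behave. Everything in it is hypothetical: measureHeight() stands in for TCPDF's getStringHeight() and simply charges a fixed height per word, and the $fieldWidth parameter is dropped because the fake measurement ignores it.

```php
<?php
// Hypothetical stand-in for the height measurement: pretend every word
// adds 1.5 units of height, so the example runs without TCPDF.
function measureHeight(array $words, int $count): float
{
    return $count * 1.5;
}

function compareHeights(float $a, string $op, float $b): bool
{
    switch ($op) {
        case '>=': return $a >= $b;
        case '<=': return $a <= $b;
        case '>':  return $a > $b;
        default:   return $a < $b;
    }
}

// Keep applying $operator to the index while the measured height satisfies
// $comparison against the field height, then return the index reached.
function operateWhile(array $words, float $index, float $fieldHeight,
                      string $operator, float $operatorVal, string $comparison): float
{
    $wordCount = count($words);
    while ($index > 0 && $index <= $wordCount) {
        $height = measureHeight($words, (int) floor($index));
        if (!compareHeights($height, $comparison, $fieldHeight)) {
            break;
        }
        switch ($operator) {
            case '/': $index /= $operatorVal; break;
            case '*': $index *= $operatorVal; break;
            case '+': $index += $operatorVal; break;
            case '-': $index -= $operatorVal; break;
        }
    }
    return $index;
}

// Halve until the candidate fits, then creep forward one word at a time,
// mirroring the two-call example above.
$words = array_fill(0, 100, 'word');
$fieldHeight = 30.0; // fits 20 words at 1.5 units each
$idx = operateWhile($words, count($words), $fieldHeight, '/', 2, '>=');
$idx = operateWhile($words, $idx, $fieldHeight, '+', 1, '<=');
echo (int) floor($idx), "\n"; // first word count that no longer fits
```

The return value marks the first word count that overflows, so the caller breaks the field one word earlier.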
Further testing came up with variations like:
$idx = $this->operateWhile($words, count($words), $fieldWidth, $fieldHeight, '/', 2, '>=');
$idx = $this->operateWhile($words, count($words), $fieldWidth, $fieldHeight, '*', .5, '<=');
$idx = $this->operateWhile($words, count($words), $fieldWidth, $fieldHeight, '/', 2, '>=');
$idx = $this->operateWhile($words, count($words), $fieldWidth, $fieldHeight, '*', .5, '<=');
$idx = $this->operateWhile($words, count($words), $fieldWidth, $fieldHeight, '/', 2, '>=');
$idx = $this->operateWhile($words, $idx, $fieldWidth, $fieldHeight, '+', 1, '<=');
For the most part, the more iterations of the binary search, the quicker things ran. The problem was that the longer the string, the larger the gap the one-by-one had to bridge. We needed a way to dynamically run the search based on string length and field size. To that end, we developed this:
const OVERFLOW_INCREMENT_MULTIPLIER = .25;
const OVERFLOW_INCREMENT_MIN = 7;

// Halve until the string fits, then derive the step size from where we landed
$breakIndex = floor($this->operateWhile($words, count($words), $fieldWidth, $fieldHeight, '/', 2, '>='));
$incrementer = floor($breakIndex * self::OVERFLOW_INCREMENT_MULTIPLIER);
while ($incrementer >= self::OVERFLOW_INCREMENT_MIN) {
    // Step forward in large jumps, back in smaller ones, shrinking each pass
    $breakIndex = $this->operateWhile($words, $breakIndex, $fieldWidth, $fieldHeight, '+', $incrementer, '<');
    $breakIndex = $this->operateWhile($words, $breakIndex, $fieldWidth, $fieldHeight, '-', floor($incrementer / 3), '>');
    $incrementer = floor($incrementer * self::OVERFLOW_INCREMENT_MULTIPLIER);
}
$breakIndex = $this->operateWhile($words, $breakIndex, $fieldWidth, $fieldHeight, '+', 1, '<=');
That's roughly the final approach, minus some additional logic to handle it running too long or breaking when the string is too short. The two constants dictate how the search is run and for how long.
That brought the final duration down to our acceptable mark of ~10s on average. It's still not ideal, but it works and the UX is acceptable. If a string flows over the dimensions, it is truncated and the remainder is moved onto another page.
Note: No actual code samples were used due to contractual requirements, although original code samples were referenced to create the ones in this post.