# Planet Primates

## August 21, 2014

### StackOverflow

#### Transform a map into a new map based on the pattern of keys in Scala

Given a map, I want to find the elements whose keys match the pattern C{NUMBER} -> STRING; this is my code to do that:

val pattern = "C([0-9]+)".r
// find the elements in C[0-9]+ format
val plots = smap filter { x =>
  x._1 match {
    case pattern(r) => true
    case _          => false
  }
}


I need to extract the elements matching the pattern, but create a new map of type Map[Int, String]. For example:

Map[String, String]("C1" -> "a", "B" -> "b", "C2" -> "c") => Map[Int, String](1 -> "a", 2 -> "c")


How can it be implemented in Scala?
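
For illustration, a minimal sketch of one way to do it (assuming the input from the example above; the value "b" for the non-matching key is a placeholder of mine), using collect to filter and convert in one pass:

```scala
val pattern = "C([0-9]+)".r
// sample input; "b" is a placeholder value for the non-matching key
val smap = Map("C1" -> "a", "B" -> "b", "C2" -> "c")

// collect both filters on the key pattern and builds the new Map[Int, String]
val plots: Map[Int, String] = smap.collect {
  case (pattern(n), v) => n.toInt -> v
}
// plots == Map(1 -> "a", 2 -> "c")
```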

#### It seems that it is not possible to do POST with body >10K with scala play ws framework

It seems that it is not possible to do a POST with a body larger than 10K.

If I do:

WS.url(url).post("content more than 10K")


I get a clipped body, at exactly 10K. How can I avoid this limitation?

#### What does %-mark mean in Clojure?

I've tried to find the answer, but it's quite difficult to search for just the %-mark. I've seen the %-mark a few times, but I can't understand what its function is. It would be very nice if somebody could explain it.

### CompsciOverflow

#### How to conduct time complexity analysis for an implemented algorithm

In my bachelor's thesis I developed an algorithm for recommender systems which uses personalized PageRank with some particular features as nodes. In the recommender systems field, you can assess how good an algorithm is using accuracy metrics (MAE, RMSE, F-measure, etc.).

What I want to do

In my case I don't want to limit my analysis to accuracy; I want to extend my work with a proper discussion comparing the amount of time needed by each of the algorithms that I've implemented. During my degree I've never done anything like this, so I don't know how to conduct it in a formal and proper way.

The personalized PageRank implementation that I use is already present in a library (the Java JUNG library), so I don't need to analyze it. Instead I want to compare the different ways of using this algorithm, which is the subject of my thesis.

What I've thought

Currently I'm measuring (using some Java methods) the time needed by each algorithm to complete the specific task. After that, I will draw a plot describing how much time you need to spend in order to reach a specific accuracy level.
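
A minimal sketch of that kind of wall-clock measurement (runAlgorithm below is a hypothetical stand-in for one of the implemented variants, not something from the question):

```scala
// Times a block of code and returns its result together with the elapsed milliseconds.
def timeMillis[A](body: => A): (A, Long) = {
  val t0 = System.nanoTime()
  val result = body
  (result, (System.nanoTime() - t0) / 1000000L)
}

// hypothetical usage: val (recommendations, ms) = timeMillis { runAlgorithm(user) }
```

On the JVM it helps to repeat each measurement several times and report the median, since JIT warm-up and garbage collection can distort a single run.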

Questions

• Is there some good work I can take inspiration from (papers, books, etc.)?
• Can you give me some tips, or simply share your experience in the field, on how to conduct this kind of analysis properly?

If there is something unclear, please leave me a comment and I'll improve my question. Thank you in advance.

### StackOverflow

#### Why do you need to create these json read/write when in java you didn't have to?

Please correct me if I am wrong, but when using Java with, say, Spring MVC, you didn't have to create these extra classes to map your Java class to JSON and JSON to class.

Why do you have to do this in Play with scala? Is it something to do with scala?

case class Location(lat: Double, long: Double)

implicit val locationWrites: Writes[Location] = (
  (JsPath \ "lat").write[Double] and
  (JsPath \ "long").write[Double]
)(unlift(Location.unapply))


### QuantOverflow

#### How to classify stocks by their volatility?

I would like to hear other possible ways of classifying stocks by the volatility of their returns. Assuming that I want to characterize each stock as a low, medium, or high volatility stock, and assuming that I know the annualized volatility for each of the stocks in my sample, what ways are there to do such a classification? I can think of two:

• Below, say, the 30th percentile (of the annualized volatilities) -> Low Volatility; between the 30th and 70th percentiles -> Medium Volatility; above the 70th percentile -> High Volatility
• More than 2 standard deviations below the mean (of the distribution of annualized volatilities) -> Low Volatility; within ±2 standard deviations -> Medium Volatility; more than 2 standard deviations above the mean -> High Volatility

Feel free to point out papers where I can find my answer.
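
A hedged sketch of the first (percentile-based) scheme; the nearest-rank percentile definition is an assumption of mine, one of several reasonable choices:

```scala
// Classify each annualized volatility as Low/Medium/High by percentile rank.
def classify(vols: Seq[Double]): Map[Double, String] = {
  val sorted = vols.sorted
  // nearest-rank percentile (one of several reasonable definitions)
  def pct(p: Double): Double = sorted(((sorted.size - 1) * p).round.toInt)
  val (lo, hi) = (pct(0.30), pct(0.70))
  vols.map { v =>
    v -> (if (v < lo) "Low" else if (v <= hi) "Medium" else "High")
  }.toMap
}
```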

### /r/compilers

#### Lexer rule for numerics

I was recently discussing with someone the rules for how numerics should be defined in a toy language we're trying to design. The argument was: should a number terminate upon hitting an alphabetic character or not? E.g., should the string 15sdfg emit two tokens (15 and sdfg, letting the parser determine that it's a syntax error), or should the lexer throw an error because that is not a valid form of number? I used Rust and C as two languages to look at for inspiration.

For Rust you get error: expected ; but found sdfg for both "let a = 15sdfg;" and "let a = 15 sdfg;". This leads me to believe it generates two tokens (one for the number 15 and one for an identifier sdfg); the parser then determines that the sequence of tokens is wrong, but it can't give you different errors because both inputs look the same to the parser.

In C, "int a = 15sdfg;" gives the error invalid suffix "sdfg" on integer constant, but "int a = 15 sdfg;" gives expected ‘,’ or ‘;’ before ‘sdfg’. As I said before, the parser wouldn't be able to tell the difference between the two cases if two tokens were generated when there is no space (e.g. 15sdfg), so I assume C throws an error then and there, or generates a single token.

I want to say that, given that you can differentiate between two different types of error, the C way is better. In general it also just seems like more of a lexical issue in the first place, and doesn't really have anything to do with syntax. But my co-worker argues otherwise.

Anyone who knows one way or another about the above two examples, feel free to chime in, because I'm just guessing. But really I'm wondering what the better way to do it is, or if it is really all that big of a deal at all.
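
A toy sketch of the C-style rule (my own illustration, not taken from either compiler): after consuming digits, any letter glued directly onto the number is reported as an invalid suffix at lex time, while a separated identifier is left for the parser:

```scala
// Returns either a lexical error or the number token plus the index just after it.
def lexNumber(input: String, start: Int): Either[String, (String, Int)] = {
  var i = start
  while (i < input.length && input(i).isDigit) i += 1
  val numEnd = i
  // letters glued directly onto the digits form an invalid suffix
  while (i < input.length && (input(i).isLetter || input(i) == '_')) i += 1
  if (i > numEnd) Left("invalid suffix \"" + input.substring(numEnd, i) + "\" on integer constant")
  else Right((input.substring(start, numEnd), numEnd))
}

// lexNumber("15sdfg;", 0) is a lex-time error; lexNumber("15 sdfg;", 0) yields the token "15"
```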

submitted by DanCardin

### StackOverflow

#### lein test (:numbers) example

From

lein help test



(deftest ^:integration network-heavy-test
  (is (= [1 2 3] (:numbers (network-operation)))))


What is

(:numbers (network-operation))


doing here?

I added the network-operation function and understand network-heavy-test2 (and it passes as expected).

I assume that (:numbers ..) or :numbers needs to be added / defined / called somewhere?

network-heavy-test fails with

FAIL in (network-heavy-test1) (core_test.clj:23)
expected: (= [1 2 3] (:numbers (network-operation)))
actual: (not (= [1 2 3] nil))


....

(defn network-operation [] [1 2 3])

(deftest ^:integration network-heavy-test2
  (is (= [1 2 3] (network-operation))))

(deftest ^:integration network-heavy-test
  (is (= [1 2 3] (:numbers (network-operation)))))


### CompsciOverflow

#### Difference between weak and strong AI

I'm trying to understand the difference between weak and strong AI. For example, let's say we could pass the Turing test: would that demonstrate strong AI or weak AI?

I don't believe that this is standard terminology; it is more philosophical. It was mentioned by John Searle in his "Chinese room argument". As I understand it, strong AI is about computers really being intelligent, having a mind and thus a consciousness, whereas weak AI refers to computers being able to simulate the behaviour of human intelligence only on specific problems (think chess, etc.).

Now, the question is: if we were able to pass the Turing test, would it be called weak or strong AI? Could it be strong AI, given that the Turing test is not limited to a certain area or a specific problem?

I came across it on wikipedia: http://en.wikipedia.org/wiki/Chinese_room

#### Learning to program in C

I have about a month to become proficient at programming in C. I wonder if anybody could recommend some worksheets/exercises that get progressively harder so I can practice and learn.

Many thanks!

### StackOverflow

#### two clojure map refs as elements of each other

I have two maps in refs and want to assoc them to each other in one transaction.

My function looks like this:

(defn assoc-two
  [one two]
  (let [newone (assoc @one :two two)
        newtwo (assoc @two :one one)]
    (ref-set one newone)
    (ref-set two newtwo)))


Now I am calling assoc-two like this:

(dosync (assoc-two (ref {}) (ref {})))


I'm getting a StackOverflowError at this point.

I also tried this:

(defn alter-two
  [one two]
  (alter one assoc :two two)
  (alter two assoc :one one))


Is there a way to do this so that one has an entry referencing two and vice versa, while staying within one transaction?

### CompsciOverflow

#### NP-hardness of an optimization problem with real value

I have an optimization problem whose answer is a real value, not an integer as in vertex cover or set cover. Therefore, the decision version of my problem takes as input an instance together with a real value $r$.

I have been able to reduce an NP-complete problem to my own problem in polynomial time. I also showed that my problem is in NP.

Since the input to the decision problem is a real value, is this reduction valid and can I categorize my problem as NP-complete?

Edit: What if the precision of this real number is limited to $\frac{1}{\mathrm{polynomial}(n)}$, i.e., the solution is a real number with polynomial precision?

### Fefe

#### What is all this military hardware, ...

What is all this military hardware that the police in Ferguson are running around with? Here an ex-Marine explains it. Money quote:
What we’re seeing here is a gaggle of cops wearing more elite killing gear than your average squad leader leading a foot patrol through the most hostile sands or hills of Afghanistan.
He also points out that in Iraq and Afghanistan they take care not to run around in full combat gear while their politicians blather in the media about peace and international understanding, precisely so as not to undermine that message. But at home the cops run around like this in black neighborhoods.

Oh, and as a Marine he is particularly pissed off that there are photos of the cops aiming directly into the camera. Marines have it drilled into them in basic training never to aim a firearm at anything they do not actually intend to shoot.

### StackOverflow

#### SBT 0.12.4 global configuration under Windows

Where should I put global SBT 0.12.4 configuration files (like plugins/build.sbt etc.) under Windows?

I'm trying C:\Users\username\.sbt\plugins, but it doesn't work.

#### Play Framework 2.1: Scala: how to get the whole base url (including protocol)?

Currently I am able to get the host from the request, which includes the domain and an optional port. Unfortunately, it does not include the protocol (http vs. https), so I cannot create absolute URLs pointing to the site itself.

object Application extends Controller {
  def index = Action { request =>
    Ok(request.host + "/some/path") // Returns "localhost:9000/some/path"
  }
}


Is there any way to get the protocol from the request object?

#### In what scenario does self-type annotation provide behavior not possible with extends

I've tried to come up with a composition scenario in which self-type and extends behave differently, and so far have not found one. The basic example always mentions that a self-type does not require the class/trait to be a subtype of the dependent type, but even in that scenario the behavior of self-type and extends seems to be identical.

trait Fooable { def X: String }
trait Bar1 { self: Fooable =>
  def Y = X + "-bar"
}
trait Bar2 extends Fooable {
  def Y = X + "-bar"
}
trait Foo extends Fooable {
  def X = "foo"
}
val b1 = new Bar1 with Foo
val b2 = new Bar2 with Foo


Is there a scenario where some form of composition or functionality of composed object is different when using one vs. the other?

Update 1: Thanks for the examples of things that are not possible without self-typing, I appreciate the information, but I am really looking for compositions where self and extends are possible, but are not interchangeable.

Update 2: I suppose the particular question I have is why the various Cake Pattern examples generally talk about having to use self-types instead of extends. I've yet to find a Cake Pattern scenario that doesn't work just as well with extends.
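
One concrete case where the two are not interchangeable (my own example, with hypothetical trait names): two traits can declare each other as self-types, whereas having them extend each other would be illegal cyclic inheritance:

```scala
// Mutually dependent traits: legal with self-types, impossible with extends.
trait Ping { self: Pong => def ping = "ping-" + pong }
trait Pong { self: Ping => def pong = "pong" }

val both = new Ping with Pong
// both.ping == "ping-pong"
```

Also note that, unlike Bar2 above, a trait with only a self-type (Bar1) is not itself a subtype of Fooable, so it cannot be used where a Fooable is expected.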

### AWS

#### DISA Authorizes AWS as First Commercial Cloud Approved for Sensitive Workloads

I am happy to announce that AWS has achieved the first DoD Provisional Authorization under the DoD Cloud Security Model at security impact levels 3-5! AWS previously received a DoD Provisional Authorization for security impact levels 1-2. This new authorization covers AWS GovCloud (US), and DoD customers can now move forward with deployments of applications processing controlled unclassified and for-official-use-only information. As part of the Level 3-5 Authorization, our partners and DoD customers will be able to implement a wide range of DoD requirements necessary to protect their data at these levels, including AWS Direct Connect routing to the DoD's network, comprehensive computer network defense coverage, and Common Access Card (CAC) integration.

In March, AWS announced its compliance with security impact levels 1-2 for all AWS Regions in the US, demonstrating adherence to hundreds of controls. With this authorization, we have provided a means for DoD customers to deploy applications at levels 3-5. DoD customers with prospective Level 3-5 applications should contact the ECSB to begin the deployment process.

With today's announcement, DoD agencies can leverage the AWS Provisional Authorization for security impact levels 1-2 and AWS GovCloud's Provisional Authorization at levels 3-5 to evaluate AWS for their unclassified applications and workloads, achieve their own authorizations to use AWS, and transition DoD workloads into the AWS environment. DoD components and federal contractors can immediately request DoD compliance support by submitting a FedRAMP/DoD Compliance Support Request and begin moving through the authorization process to achieve a DoD ATO for Levels 1-5 with AWS.

-- Jeff;

### StackOverflow

#### Jackson / JSON Custom Serializers for polymorphic classes in collections

I'm running into a problem using Jackson to serialize a list of polymorphic objects. Using this link as a starting point, I can recreate the issue: http://programmerbruce.blogspot.com/2011/05/deserialize-json-with-jackson-into.html

Classes:

@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS,
              include = JsonTypeInfo.As.PROPERTY,
              property = "type")
trait IAnimal
{
  def name: String
}

abstract class AbstractAnimal extends IAnimal
{
  @BeanProperty
  var name: String = _
}

class Cat extends AbstractAnimal
{
  @BeanProperty
  var favoriteToy: String = _
}

class Dog extends AbstractAnimal
{
  @BeanProperty
  var breed: String = _
  @BeanProperty
  var leashColor: String = _
}


My example code looks like this:

val zoo = new PolyZoo()
zoo.animals = List( Cat("fluffy", "catnip"), Dog("spike", "mutt", "red"))
val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule)
val json = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(zoo)
println(json)


and yields this result:

{
  "animals" : [ {
    "type" : "com.example.Cat",
    "favoriteToy" : "catnip",
    "name" : "fluffy"
  }, {
    "type" : "com.example.Dog",
    "breed" : "mutt",
    "leashColor" : "red",
    "name" : "spike"
  } ]
}


Now, for my real application, I need to define custom serializers for the different types (in this case, Cat and Dog). So I wrote a CatSerializer. The implementation below just throws an exception when called, but it works for this illustration:

class CatSerializer extends StdSerializer[Cat](classOf[Cat])
{
  // should throw a NotImplementedException when called, just for test
  def serialize(value: Cat, jgen: JsonGenerator, provider: SerializerProvider) = ???
}


I modified my main program to register the new serializer

val zoo = new PolyZoo()
zoo.animals = List( Cat("fluffy", "catnip"), Dog("spike", "mutt", "red"))
val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule)

// install the CatSerializer
val module = new SimpleModule("CustomStuff")
module.addSerializer(classOf[Cat], new CatSerializer)
mapper.registerModule(module)

// continue on as before
val json = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(zoo)
println(json)


Unfortunately, I now get a JsonMappingException:

com.fasterxml.jackson.databind.JsonMappingException: Type id handling not implemented for type com.example.Cat (through reference chain: com.example.PolyZoo["animals"]->scala.collection.convert.IterableWrapper[0])
at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:210)
at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:189)
at com.fasterxml.jackson.databind.ser.std.StdSerializer.wrapAndThrow(StdSerializer.java:213)
at com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serializeContents(CollectionSerializer.java:126)
at com.fasterxml.jackson.module.scala.ser.IterableSerializer.serializeContents(IterableSerializerModule.scala:30)
at com.fasterxml.jackson.module.scala.ser.IterableSerializer.serializeContents(IterableSerializerModule.scala:16)
at com.fasterxml.jackson.databind.ser.std.AsArraySerializerBase.serialize(AsArraySerializerBase.java:183)
at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:505)
at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:639)
at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:152)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:114)
at com.fasterxml.jackson.databind.ObjectWriter._configAndWriteValue(ObjectWriter.java:800)
at com.fasterxml.jackson.databind.ObjectWriter.writeValueAsString(ObjectWriter.java:676)


So it appears that adding a CatSerializer into the mix confused Jackson's type-id handling so much that it never even got to my NotImplementedException. Any suggestions?

#### Ambiguous implicit values

I thought I understood Scala implicits until I recently faced a strange problem.

In my application I have several domain classes

case class Foo(baz: String)
case class Bar(baz: String)


And a class that is able to construct a domain object from a string. It could be subclassed to do real deserialization; that doesn't matter here.

class Reads[A] {
  def read(s: String): A = throw new Exception("not implemented")
}


Next, there are implicit deserializers

implicit val fooReads = new Reads[Foo]


And a helper to convert strings to one of domain classes

def convert[A](s: String)(implicit reads: Reads[A]): A = reads.read(s)


Unfortunately, when trying to use it

def f(s: String): Foo = convert(s)


I get compiler errors like

error: ambiguous implicit values:
def f(s: String): Foo = convert(s)
^


To me the code seems simple and correct. Reads[Foo] and Reads[Bar] are completely different types; what is ambiguous about that?

The real code is much more complicated and uses play.api.libs.json, but this simplified version is sufficient to reproduce the error.

### CompsciOverflow

#### Time Complexity of Apriori and Fp Growth

What is the time complexity of Apriori and FP-Growth? I have been searching the internet for a week for their big-O complexity, but I am unable to find any proper reference. Kindly help me with this, please.

### QuantOverflow

#### Methods for "prompt month equivalent" exposure in commodities forwards/futures markets

It is common in commodities markets to hold many positions, both long and short, across a range of contract months, beginning with the prompt month (today, September) and extending five or more years out. In general the prompt month exhibits the most volatility, and far-out months exhibit the least (among the same exact products).

This makes the total notional 'position' for a product quite misleading. For example, if today I purchase 1 September 2014 contract and sell 1 September 2019 contract, my net notional position is 0, suggesting the portfolio has no risk. In reality, if the prompt month appreciates 10%, the 2019 contract will appreciate maybe 1%, and you will have realized a significant gain.

A standard VaR calculation captures and handles this well, but I want to be able to measure my true spot-price exposure for a product class, for purposes of separating position limits from VaR limits.

So, the question,

What is the industry-standard model for condensing a strip of forward contracts into a single exposure number "FME", such that it is reasonable to approximate PnL as FME * spot price (presume you are given a corr/cov matrix)?

The most relevant article I can find is here. My calculations from that paper seem somewhat accurate, and subjectively it covers the subject quite well, but it has no citations and I doubt my peers would accept it as a source for policy.

http://kiodex.com/our_library/FME_Whitepaper.pdf
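
For concreteness, one plausible beta-style mapping (an assumption of mine, not necessarily the industry standard the question asks for) scales each month's position by its volatility relative to the prompt month, weighted by correlation:

$$\mathrm{FME} \approx \sum_i q_i \,\rho_{i,1}\,\frac{\sigma_i}{\sigma_1}$$

where $q_i$ is the notional in month $i$, $\sigma_i$ its annualized volatility, and $\rho_{i,1}$ its correlation with the prompt month. In the example above, the 2019 contract moving about 1% for a 10% prompt move would contribute roughly 0.1 contracts of prompt-equivalent exposure.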

### StackOverflow

#### How to disable Gradle daemon in IntelliJ Idea?

I need to disable the Gradle daemon in IntelliJ IDEA, because the Scala plugin somehow doesn't work with the daemon (compilation fails with a NullPointerException). I have tried editing my IntelliJ Gradle build configurations to include the JVM system parameter "-Dorg.gradle.daemon=false": (http://i.stack.imgur.com/x4P98.png)

I've also tried the "--no-daemon" flag in the same places (script parameters and VM options), and I've tried specifying these options in IntelliJ's "Preferences -> Gradle" menu. None of these attempts had any effect; the daemon continues to start, so I have to kill it before running/compiling a second time. (http://i.stack.imgur.com/syBgC.png)

How can I disable Gradle daemon usage in IntelliJ IDEA?

### QuantOverflow

#### Modelling currency exchange rate timeseries data across re-denomination dates

I am working with data for an exotic currency that has been re-denominated a couple of times during the twenty years of data that I have.

What is the best way of 'normalising' the data so that I can work with it, even though it contains two 'switch-over' dates on which the currency was re-denominated?
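
A minimal sketch of the usual splicing approach, under the assumption (mine) that each re-denomination has a known cut-over date and conversion factor; the date and factor below are hypothetical:

```scala
import java.time.LocalDate

// hypothetical cut-over: on 1999-01-01, 1 new unit replaced 1000 old units
val redenominations = List(LocalDate.of(1999, 1, 1) -> 1000.0)

// divide every pre-cut-over observation by the product of all later conversion
// factors, so the whole series is expressed in today's units
def normalize(series: Seq[(LocalDate, Double)]): Seq[(LocalDate, Double)] =
  series.map { case (date, px) =>
    val factor = redenominations.collect {
      case (cut, f) if date.isBefore(cut) => f
    }.product
    (date, px / factor)
  }
```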

### CompsciOverflow

#### Optimal displacement on a board [closed]

An $N \times M$ matrix $A$ is given where each element is a non-negative integer. One can move on the matrix in steps. Each step is a succession of $p$ jumps, each either rightwards or downwards, i.e. either from $A[i][j]$ to $A[i+1][j]$ or from $A[i][j]$ to $A[i][j+1]$, such that $p\leq A[i_0][j_0]$, where $i_0$ and $j_0$ are the coordinates at the beginning of the step.

The problem is to give an algorithm to find the minimum number of jumps to reach cell $(N, M)$ from $(0, 0)$.

$N$ is number of rows and $M$ is number of columns.

$\forall i,j \; A[i][j]\geq 0$

Can someone help me with this? Thanks.
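
A hedged BFS sketch of one reading of the problem (a step moves up to $A[i][j]$ cells rightwards or downwards, and we minimise the number of steps); it returns -1 when the target is unreachable:

```scala
import scala.collection.mutable

// Breadth-first search over cells; dist(i)(j) = fewest steps needed to reach (i, j).
def minSteps(a: Array[Array[Int]]): Int = {
  val (n, m) = (a.length, a(0).length)
  val dist = Array.fill(n, m)(-1)
  val queue = mutable.Queue((0, 0))
  dist(0)(0) = 0
  while (queue.nonEmpty) {
    val (i, j) = queue.dequeue()
    for {
      p <- 1 to a(i)(j)
      (ni, nj) <- Seq((i + p, j), (i, j + p))
      if ni < n && nj < m && dist(ni)(nj) == -1
    } {
      dist(ni)(nj) = dist(i)(j) + 1
      queue.enqueue((ni, nj))
    }
  }
  dist(n - 1)(m - 1)
}
```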

### TheoryOverflow

OK, so before you mark this as off-topic: this has something to do with my computer, but I can't seem to find the right topic for it. IT IS NOT a web app problem!

I can not seem to connect to YouTube at all. If I connect directly to YouTube (http://youtube.com) I get this:

And if I try to connect via video directly (http://youtube.com/watch?v=), I get this:

Here's how I know this isn't a web issue: I believe I have malware on my computer. The question is, how do I get rid of it, or what could the issue possibly be? I can verify this is not a website or wifi issue: I've tried to connect on another computer on my wifi, and it worked. It is a local machine issue. I've tried my best to get rid of the malware and have also tried disabling possibly malicious extensions. All attempts came out with the same result: no help.

### Dave Winer

Scripting News: What dreams mean.

### StackOverflow

#### Are Databases and Functional Programming at odds?

I've been a web developer for some time now, and have recently started learning some functional programming. Like others, I've had significant trouble applying many of these concepts to my professional work. For me, the primary reason is that I see a conflict between FP's goal of remaining stateless and the fact that most web development work I've done has been heavily tied to databases, which are very data-centric.

One thing that made me a much more productive developer on the OOP side of things was the discovery of object-relational mappers like MyGeneration d00dads for .Net, Class::DBI for perl, ActiveRecord for ruby, etc. This allowed me to stay away from writing insert and select statements all day, and to focus on working with the data easily as objects. Of course, I could still write SQL queries when their power was needed, but otherwise it was abstracted nicely behind the scenes.

Now, turning to functional programming, it seems like many of the FP web frameworks, such as Links, require writing a lot of boilerplate SQL code, as in this example. Weblocks seems a little better, but it seems to use a kind of OOP model for working with data, and still requires code to be manually written for each table in your database, as in this example. I suppose you could use some code generation to write these mapping functions, but that seems decidedly un-Lisp-like.

(Note I have not looked at Weblocks or Links extremely closely, I may just be misunderstanding how they are used).

So the question is: for the database-access portions (which I believe are pretty large) of a web application, or of other development requiring an interface with a SQL database, we seem to be forced down one of the following paths:

1. Don't Use Functional Programming
2. Access data in an annoying, un-abstracted way that involves manually writing a lot of SQL or SQL-like code, à la Links
3. Force our functional language into a pseudo-OOP paradigm, thus removing some of the elegance and stability of true functional programming.

Clearly, none of these options seems ideal. Has anyone found a way to circumvent these issues? Is there really even an issue here?

Note: I personally am most familiar with Lisp on the FP front, so if you want to give any examples and know multiple FP languages, Lisp would probably be the preferred language of choice.

PS: For issues specific to other aspects of web development, see this question.

### StackOverflow

#### Type mismatch with Array of Array in Scala

I'm trying to build an array of arrays to pass as an argument to a method. The values of the inner arrays are any kind of value type (AnyVal), such as Int or Double.

The method's signature is as follows:

def plot[T <: AnyVal](config:Map[String, String], data:Array[Array[T]]): Unit = {


This is the code:

val array1 = (1 to 10).toArray
val array2 = ArrayBuffer[Int]()
array1.foreach { i =>
  array2 += getSize(summary, i)
}
val array3 = new Array[Int](summary.getSize())

val arrays = ArrayBuffer[Array[AnyVal]](array1, array2.toArray, array3) // <-- ERROR1
Gnuplotter.plot(smap, arrays.toArray) // <-- ERROR2


However, I get two errors at the lines marked above.

What might be wrong?
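
For what it's worth, a small illustration of the likely culprit (an assumption on my part, since the error messages are not shown): Scala's Array is invariant, so an Array[Int] is not an Array[AnyVal], and the elements must be widened explicitly:

```scala
val ints: Array[Int] = (1 to 3).toArray
// val bad: Array[AnyVal] = ints   // does not compile: Array is invariant in its element type
val widened: Array[AnyVal] = ints.map(x => x: AnyVal) // boxes each element to widen it
```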

### StackOverflow

#### What can cause the Squeryl table object to be not found?

I am encountering a compile-time error while attempting to get Squeryl example code running. The following code is based on the "My Adventures in Coding" blog post about connecting to SQL Server using Squeryl.

import org.squeryl.adapters.MSSQLServer
import org.squeryl.{ SessionFactory, Session}
import com.company.model.Consumer

class SandBox {
  def tester() = {
    val databaseConnectionUrl = "jdbc:jtds:sqlserver://myservername;DatabaseName=mydatabasename"

    Class.forName("net.sourceforge.jtds.jdbc.Driver")

    SessionFactory.concreteFactory = Some(() =>
      Session.create(
        new MSSQLServer))

    val consumers = table[Consumer]("Consumer")
  }
}


I believe I have the build.sbt file configured correctly to import the Squeryl and jTDS libraries. When running SBT after adding the dependencies, it appeared to download the needed libraries.

libraryDependencies ++= List(
  "org.squeryl" %% "squeryl" % "0.9.5-6",
  "net.sourceforge.jtds" % "jtds" % "1.2.4",
  Company.teamcityDepend("company-services-libs"),
  Company.teamcityDepend("ssolibrary")
) ::: Company.teamcityConfDepend("company-services-common", "test,gatling")


I am certain that at least some of the dependencies were successfully installed; I base this on the fact that the SessionFactory code block compiles successfully. It is only the line that attempts to set up a mapping from the Consumer class to the Consumer SQL Server table that fails:

val consumers = table[Consumer]("Consumer")


This line causes a compile-time error to be thrown: the compiler is not able to find the table object.

[info] Compiling 8 Scala sources to /Users/pbeacom/Company/integration/target/scala-2.10/classes...
[error]     val consumers = table[Consumer]("Consumer")


The version of Scala in use is 2.10, and if the table line is commented out, the code compiles successfully. Use of the table object for data-model mappings is nearly ubiquitous in the Squeryl examples I've been researching online, and no one else seems to have encountered a similar problem.

### /r/compsci

#### What is the difference between getting a Open Source licence, like MIT, and doing nothing?

submitted by wastapunk

### Lobsters

#### How to Keep Your Neighbours in Order [Conor McBride]

In this paper McBride uses dependent types to define a set of data types whose elements are known (at compile time) to be in order. Generic programs for insertion and flattening are put together to build algorithms like quicksort and deletion from balanced 2-3 trees in a correct-by-construction way (without having to work with proofs).

It is exhilarating being drawn to one’s code by the strong currents of a good design. But that happens only in the last iteration: we are just as efficiently dashed against the rocks by a bad design, and the best tool to support recovery remains, literally, the drawing board.

### Dave Winer

Scripting News: The dog days of summer.

### StackOverflow

#### GridFS resizing image on the fly in Scala/Java/Play Framework

Here is my code for serving an image (no resizing). I want to resize the image before serving it, so I tried putting the size in the URL, like this: /img/24x24/filename.jpg. I tried many methods before asking and none of them worked; has anyone implemented this before? Please help. Thanks.

val gridFS = new GridFS(db, "pics")
val file = gridFS.find(BSONDocument("filename" -> filename))

serve(gridFS, file).map(_.withHeaders(CONTENT_DISPOSITION -> "inline;", CONTENT_TYPE -> "image/jpeg")) recover {
  case e => NotFound(
    ...
  )
}


This is another method I used, which also works:

val t = file.headOption.filter(_.isDefined).map(_.get).map { file =>
  val enumerateContent = gridFS.enumerate(file)
  SimpleResult(
    body = enumerateContent
  ).withHeaders(CONTENT_DISPOSITION -> "inline;", CONTENT_TYPE -> "image/jpeg")
}
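
Independent of GridFS, one hedged way to do the resizing step itself is with the JDK's built-in ImageIO (a sketch only; wiring it into the enumerator-based serving code above is left out):

```scala
import java.awt.image.BufferedImage
import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
import javax.imageio.ImageIO

// Decode the stored bytes, scale to w x h, and re-encode as JPEG.
def resizeJpeg(bytes: Array[Byte], w: Int, h: Int): Array[Byte] = {
  val src = ImageIO.read(new ByteArrayInputStream(bytes))
  val dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB)
  val g = dst.createGraphics()
  g.drawImage(src, 0, 0, w, h, null) // scale while drawing into the target image
  g.dispose()
  val out = new ByteArrayOutputStream()
  ImageIO.write(dst, "jpg", out)
  out.toByteArray
}
```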


### StackOverflow

#### Specifying logarithmic axis values (labels and ticks) in JFreeChart

I am struggling with LogAxis to get sensible frequency labels. For example, using an equal-tempered scale with A4 = 440 Hz, such as this table, I want labels to appear at

(30 to 120 by 2).map(midicps).foreach(println)

46.249302
51.91309
58.270466
65.406395
73.4162
82.40688
92.498604
103.82618
116.54095
130.81279
146.83238
164.81378
184.99721
207.65234
233.08188
261.62558
293.66476
329.62756
369.99442
415.3047
466.16376
523.25116
587.3295
...
4698.6367
5274.0405
5919.9106
6644.8755
7458.621
8372.019


Hertz, where

def midicps(d: Double): Double = 440 * math.pow(2, (d - 69) / 12)


In other words, I have twelve divisions per octave (an octave being a doubling of value), with a fixed reference frequency of 440.0 Hz. I happen to have a lower bound of 32.7 and an upper bound of 16700.0 for the plot.

My first attempt:

import org.jfree.chart._
val pl = new plot.XYPlot
val yaxis = new axis.LogAxis
yaxis.setLowerBound(32.7)
yaxis.setUpperBound(16.7e3)
yaxis.setBase(math.pow(2.0, 1.0/12))
yaxis.setMinorTickMarksVisible(true)
yaxis.setStandardTickUnits(axis.NumberAxis.createStandardTickUnits())
pl.setRangeAxis(yaxis)
val ch = new JFreeChart(pl)
val pn = new ChartPanel(ch)
new javax.swing.JFrame {
  setContentPane(pn) // attach the chart panel so the frame is not empty
  pack()
  setVisible(true)
}


This gives me labels which do not fall on any of the above raster points.

Any ideas how to enforce my raster?
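
The raster itself is easy to generate directly from midicps; a custom TickUnitSource (or a LogAxis subclass overriding its tick computation) could then be fed exactly these values. A minimal, JFreeChart-free sketch of the tick computation, with the plot bounds from the question:

```scala
// Equal-tempered pitch (MIDI note number) to frequency in Hz; A4 (note 69) = 440 Hz.
def midicps(d: Double): Double = 440 * math.pow(2, (d - 69) / 12)

// Tick raster: every other semitone, clipped to the plot bounds.
val lower = 32.7
val upper = 16.7e3
val ticks: Vector[Double] =
  (0 to 150 by 2)
    .map(n => midicps(n.toDouble))
    .filter(f => f >= lower && f <= upper)
    .toVector
```

Each consecutive pair of ticks is spaced by a factor of 2^(2/12), i.e. two equal-tempered semitones, matching the `(30 to 120 by 2)` listing above.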

### Daniel Lemire

#### Expert performance and training: what we really know

Movies such as Good Will Hunting tell beautiful stories about young people able to instantly master difficult topics, without any effort on their part.

That performance is unrelated to effort is an appealing belief. Whether you perform well or poorly is not your fault. Some go further and conclude that success and skill levels are primarily about genetics. That is an even more convenient observation: the quality of your parenting or education becomes irrelevant. If kids raised in the ghetto do poorly, it is because they inherited the genes of their parents! I personally believe that poor kids tend to do poorly in school primarily because they work less at it (e.g., kids from the ghetto will tend to pass on their homework assignments for various reasons).

A recent study by Macnamara et al. suggests that practice explained less than 1% of the variance in performance within professions, and generally less than 25% of the variance in other activities.

It is one of several similar studies attempting to debunk the claim popularized by Gladwell that expert performance requires 10,000 hours of deliberate training.

Let us get one source of objection out of the way: merely practicing is insufficient to reach world-expert levels of performance. You have to practice the right way, you have to put in the mental effort, and you have to have the basic dispositions. (I can never be a star basketball player.) You also need to live in the right context. Meeting the right people at the right time can have a determining effect on your performance.

But it is easy to underestimate the value of hard work and motivation. We all know that Kenyans and Ethiopians make superb long-distance runners. Right? This is all about genetics, right? Actually, though their body type predisposes them to good performance, factors like high motivation and much training in the right conditions are likely much more important than any one specific gene.

Time and time again, I have heard people claim that mathematics and abstract thinking were just beyond them. I also believe these people when they point out that they have put in many hours of effort… However, in my experience, most students do not know how to study properly. You should never, ever, cram the night before an exam. You should not do your homework in one pass: you should do it once, set it aside, and then revise it. You absolutely need to work hard at learning the material, forget it for a time, and then work at it again. That is how you retain the material in the long run. You also need to have multiple references, repeatedly train on many problems and so on.

I believe that poor study habits probably explain much of the cultural differences in school results. Some cultures seem to do a lot more to show their kids how to be intellectually efficient.

I also believe that most people overestimate the amount of time and effort they put into skills they do not yet master. For example, whenever I face someone who failed to master the basics of programming, they are typically at a loss to describe the work they did before giving up. Have they been practicing programming problems every few days for months? Or did they just try for a few weeks before giving up? The latter appears much more likely, as they are not able to document how they spent hundreds of hours. Where is all the software that they wrote?

Luck is certainly required to reach the highest spheres, but without practice and hard work, top level performance is unlikely. Some simple observations should convince you:

• There are few people who make world-class contributions in several fields at once… there are few polymaths. It is virtually impossible for someone to become a world expert in several distinct activities. This indicates that much effort is required for world-class performance in any one activity. This is in contrast with a movie like Good Will Hunting, where the main character appears to have effortlessly acquired top-level skills in history, economics, and mathematics.

A superb scientist like von Neumann was able to make lasting contributions in several fields, but this tells us more about his strategies than the breadth of his knowledge:

Von Neumann was not satisfied with seeing things quickly and clearly; he also worked very hard. His wife said “he had always done his writing at home during the night or at dawn. His capacity for work was practically unlimited.” In addition to his work at home, he worked hard at his office. He arrived early, he stayed late, and he never wasted any time. (…) He wasn’t afraid of anything. He knew a lot of mathematics, but there were also gaps in his knowledge, most notably number theory and algebraic topology. Once when he saw some of us at a blackboard staring at a rectangle that had arrows marked on each of its sides, he wanted to know what that was. “Oh just the torus, you know – the usual identification convention.” No, he didn’t know. The subject is elementary, but some of it just never crossed his path, and even though most graduate students knew about it, he didn’t. (Halmos, 1973)

• In the arts and sciences, world experts are consistently in their 30s and 40s, or older. This suggests that about 10 years of hard work are needed to reach world-expert levels of performance. There are certainly exceptions. Einstein and Galois were in their 20s when they did their best work. However, these exceptions are very uncommon. And even Einstein, despite being probably the smartest scientist of his century, only got his PhD at 26. We know little about Galois except that he was passionate, even obsessive, about Mathematics as a teenager and he was homeschooled.
• Even the very best improve their skills only gradually. Musicians or athletes do not suddenly become measurably better from one performance to the other. We see them improve over months. This suggests that they need to train and practice.

When you search in the past of people who burst on the scene, you often find that they have been training for years. In interviews with young mathematical prodigies, you typically find that they have been teaching themselves mathematics with a passion for many years.

A common counterpoint is to cite studies on identical twins showing that twins raised apart exhibit striking similarities in terms of skills. If you are doing well in school, and you have an identical twin raised apart, he is probably doing well in school. This would tend to show that skills are genetically determined. There are two key weaknesses to this point. Firstly, separated twins tend to live in similar (solidly middle class) homes. Is it any wonder that people who are genetically identical and live in similar environment end up with similar non-extraordinary abilities? Secondly, we have virtually no reported case of twins raised apart reaching world-class levels. It would be fascinating if twins, raised apart, simultaneously and independently reached Einstein-level abilities… Unfortunately, we have no such evidence.

As far as we know, if you are a world-class surgeon or programmer, you have had to work hard for many years.

### /r/emacs

#### Using id-utils/mkid/gid with C++

I've been a satisfied id-utils user for the past year. I was recently confronted with C++ code and appear to be running into problems. For example:

1st file, class.h:

class class_A {
  func_A();
  ...
};

2nd file, lib.cpp:

class_A::func_A() {
  ...
}

3rd file, main.cpp:

main() {
  ...
  class_A::func_A();
  ...
}

When I run mkid/gid on the function func_A, the only hit I get is the member function declaration in the 1st file. Am I missing something here? Is there a trick to get gid to work better with C++ code?

I checked man pages and the email archives, but wasn't able to come up with anything helpful. When I run 'gid' on func_A, I'd like to get hits for all 3 file occurrences above.

submitted by sbay

### StackOverflow

#### How to simplify the scala code which continually reads next page and returns a \/ type

I'm writing some Scala code, found it a little bit complex, and I'm trying to make it simpler.

There is a function that reads the content from a URL, which is JSON:

{
"items": ["aaa", "bbb", "ccc"],
"next": "http://some.com/page2"
}


Then it parses the content into a Page instance:

def loadPage(url:String): Try[Page] = { ignore the code here }

case class Item(content:String)
case class Page(items: List[Item], next: Option[String])


You can see that the page contains some items and also a next url. If it's the last page, there is no next field provided.

Now I want to write a function which takes a starting URL, reads it and all the following pages, and returns a Throwable \/ List[Item]. (Here, \/ is the Either type provided by scalaz.)

def readItems(startingUrl: String): Throwable \/ List[Item] = {
???
}


Now I have a solution which uses a recursive (not tail-recursive) function:

def readItems(startLink: String): Throwable \/ List[Item] = {
  def fetchChanges(link: String): Throwable \/ List[Item] =
    loadPage(link) match {
      case Success(page) => page.next.fold(page.items.right[Throwable]) { nextLink =>
        fetchChanges(nextLink).map(page.items ::: _)
      }
      case Failure(NonFatal(e)) => e.left
    }
  fetchChanges(startLink)
}


Then I think it's better to provide a tail-recursive one, to avoid a stack overflow if the page chain is too long:

def readItems2(startLink: String): Throwable \/ List[Item] = {
  @tailrec
  def fetchChanges(link: String, result: Throwable \/ List[Item]): Throwable \/ List[Item] =
    if (result.isLeft)
      result
    else loadPage(link) match {
      case Success(page) => page.next match {
        case Some(next) => fetchChanges(next, result.map(_ ::: page.items))
        case _          => result.map(_ ::: page.items)
      }
      case Failure(NonFatal(e)) => e.left
    }
  fetchChanges(startLink, List.empty[Item].right)
}


You can see the code is pretty complex. Is there any way to make it a little bit simpler? (e.g. to use some features of scalaz)

Update: the return type of loadPage can be changed if needed
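
For comparison, the same loop can be written against the standard library's Either (no scalaz), keeping the accumulation in a single @tailrec loop. This is a sketch: loadPage is stubbed with an in-memory map here purely for illustration; a real implementation would fetch and parse each URL.

```scala
import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

case class Item(content: String)
case class Page(items: List[Item], next: Option[String])

// Stubbed page loader for illustration only.
val pages: Map[String, Page] = Map(
  "p1" -> Page(List(Item("aaa")), Some("p2")),
  "p2" -> Page(List(Item("bbb")), None)
)
def loadPage(url: String): Try[Page] = Try(pages(url))

def readItems(startLink: String): Either[Throwable, List[Item]] = {
  @tailrec
  def loop(link: String, acc: List[Item]): Either[Throwable, List[Item]] =
    loadPage(link) match {
      case Failure(e) => Left(e)
      case Success(page) =>
        page.next match {
          case Some(next) => loop(next, acc ::: page.items) // follow the chain
          case None       => Right(acc ::: page.items)      // last page reached
        }
    }
  loop(startLink, Nil)
}
```

The accumulator makes the recursion trivially tail-recursive, and the first Failure short-circuits the loop.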

#### Why favour sbt over Gradle in a Scala project?

I am diving into Scala and noticed sbt. I have been quite happy with Gradle in Java/Groovy projects, and I know there's a Scala plugin for Gradle.

What could be good reasons to favour sbt over Gradle in a Scala project?

### CompsciOverflow

#### What's the time complexity of this algorithm for "Word Break"?

Word Break (Dynamic Programming)
Given a string s and a dictionary of words dict, add spaces in s to construct a sentence where each word is a valid dictionary word.

Return all such possible sentences.

For example, given

• s = "catsanddog",dict = ["cat", "cats", "and", "sand", "dog"].
• A solution is ["cats and dog", "cat sand dog"].

Question:

• Time complexity = ?
• Space complexity = ?

Personally I think,

• Time complexity = O(n!), n is the length of the given string.
• Space complexity = O(n).

Doubt:
It seems that without DP the time complexity is O(n!), but with DP, what is it?

Solution: DFS + backtracking (recursion) + DP.
Code: Java

public class Solution {
public List<String> wordBreak(String s, Set<String> dict) {
List<String> list = new ArrayList<String>();

// Input checking.
if (s == null || s.length() == 0 ||
dict == null || dict.size() == 0) return list;

int len = s.length();

// memo[i] records whether a split at index "i"
// can still lead to at least one valid sentence.
boolean memo[] = new boolean[len];
for (int i = 0; i < len; i++) memo[i] = true;

StringBuilder tmpStrBuilder = new StringBuilder();
helper(s, 0, tmpStrBuilder, dict, list, memo);

return list;
}

private void helper(String s, int start, StringBuilder tmpStrBuilder,
Set<String> dict, List<String> list, boolean[] memo) {

// Base case.
if (start >= s.length()) {
return;
}

int listSizeBeforeRecursion = 0;
for (int i = start; i < s.length(); i ++) {
if (memo[i] == false) continue;

String curr = s.substring(start, i + 1);
if (!dict.contains(curr)) continue;

// Have a try.
tmpStrBuilder.append(curr);
tmpStrBuilder.append(" ");

// Do recursion.
listSizeBeforeRecursion = list.size();
helper(s, i + 1, tmpStrBuilder, dict, list, memo);

if (list.size() == listSizeBeforeRecursion) memo[i] = false;

// Roll back.
tmpStrBuilder.setLength(tmpStrBuilder.length() - curr.length() - 1);
}
}
}
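
For reference, the same search can also be memoized on the start index by caching the list of sentences derivable from each suffix, so each suffix is solved once. A sketch in Scala (a reformulation of the idea, not a translation of the Java code above):

```scala
// Word Break: return all sentences formed by splitting `s` into dictionary words.
// memo caches, per start index, every sentence derivable from that suffix.
def wordBreak(s: String, dict: Set[String]): List[String] = {
  val memo = scala.collection.mutable.Map.empty[Int, List[String]]
  def solve(start: Int): List[String] = memo.get(start) match {
    case Some(cached) => cached
    case None =>
      val result =
        if (start == s.length) List("") // one way to finish: the empty tail
        else
          (start + 1 to s.length).toList.flatMap { end =>
            val word = s.substring(start, end)
            if (dict.contains(word))
              solve(end).map(rest => if (rest.isEmpty) word else word + " " + rest)
            else Nil
          }
      memo(start) = result
      result
  }
  solve(0)
}
```

With this memoization, the time per index is bounded by the number of sentences actually produced, although the output itself can still be exponential in the worst case (e.g. s = "aaaa…" with dict = {"a", "aa"}).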


#### Is computation expression the same as monad?

I'm still learning functional programming (with F#) and I recently started reading about computation expressions. I still don't fully understand the concept, and one thing that keeps me unsure when reading all the articles regarding monads (most of them written with Haskell in mind) is the relation between computation expressions and monads.

Having written all that, here's my question (two questions actually):

Is every F# computation expression a monad? Can every monad be expressed with F# computation expression?

I've read this post by Tomas Petricek and, if I understand it well, it states that computation expressions are more than monads, but I'm not sure I'm interpreting it correctly.

### StackOverflow

#### Making one Option[List[MyType]] from three different Option[List[MyType]]

I have

def searchListProducts1 = models.Products.IndivProduct.getProductsFromJsObjectList(productsTextSearchDescription)
def searchListProducts2 = models.Products.IndivProduct.getProductsFromJsObjectList(productsTextSearchName)
def searchListProducts3 = models.Products.IndivProduct.getProductsFromJsObjectList(productsTextSearchIngredients)


where each is Option[List[MyType]]

I want to "merge" them all together (is that a fold?) so that I have just one Option[List[MyType]]

Thanks
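
One way to combine them (a sketch; `mergeAll` is a name made up here, with simple stand-in values instead of the real `searchListProducts` defs) is to flatten away the `None`s and concatenate the remaining lists, yielding `None` only when every input is `None`:

```scala
// Merge several Option[List[A]] into one: concatenate the defined lists;
// yield None only if every input is None.
def mergeAll[A](opts: Option[List[A]]*): Option[List[A]] = {
  val defined = opts.flatten // keep only the Somes, as a Seq[List[A]]
  if (defined.isEmpty) None else Some(defined.toList.flatten)
}
```

And yes, this is effectively a fold over the three options; the helper above just packages that fold behind a varargs signature.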

### QuantOverflow

#### multi factor equity model exposures not as expected

I'm researching an equity multi factor model.

It contains three factors, say A, B & C. The factors are weighted as follows:

0.6 × (0.7 × A + 0.3 × B) + 0.4 × C

i.e. 60% on the A/B composite and 40% on C.


I am running a back test on this model. When running the model I constrain the risk factors (momentum, beta, size, etc.) to have limited exposure, so ideally most of the return should be explained by my model. Looking at the exposures of my factors A, B & C over the last 12 months, the exposure of B is much larger than that of A. I am trying to understand why this might be, but I am not sure where to start.

### StackOverflow

#### sbt/ivy failing to resolve wildcard ivy dependencies on a filesystem resolver

I am using the ~/.sbt/repositories file to tell sbt 0.13.5 which repositories to retrieve from. That file contains only local and a file:// repository with a custom layout that closely resembles the standard sbt one, with the optional sbtVersion and scalaVersion fields represented.

When it comes to resolving dependencies for my project, I've noticed weird behavior:

• Resolving exact dependencies works fine
• latest.integration also works fine
• Wildcard resolution of the form x.y.+ doesn't find anything, and instead seems to be searching for literal patterns. I get errors of the form:
    [info] Resolving myorg#mypackage_2.10;2.7.1.+ ...
[warn] ==== myrepo: tried
[warn]   file://path/to/my/repo/myorg/mypackage_2.10/[revision]/ivy-[revision].xml


which, as you can see, mention the repo layout pattern explicitly.

I'm mostly confused because the resolver works fine for anything but the + wildcard dependencies. I tried poking around the ivy documentation to figure out if certain resolvers (like the file:// resolver I'm using) don't implement certain kinds of dependency resolution, but that didn't seem to be a thing, so I'm mostly stumped. Any idea what I can do to make it work, or what might be causing it?

#### Scala IDE not working properly in Eclipse Luna for Java EE

I've tried and re-tried to install the Scala IDE in several different ways in the Java EE specific version of Eclipse, but I just can't get it to work.

The Scala first-time configuration screen doesn't appear, I can't create Scala projects, and the Scala perspective is nowhere to be found...

I've used the Scala IDE before, and it always worked flawlessly...

Going to Help -> About Eclipse -> Installation Details I can see that the IDE is indeed installed, so why it doesn't work is beyond me...

Any help in resolving this issue?

### TheoryOverflow

#### How to conduct a computational analysis of a Java program

In my bachelor's degree thesis I've developed an algorithm for recommender systems which uses personalized PageRank with some particular features as nodes. In the recommender systems field, you can assess how good an algorithm is using some accuracy metrics (MAE, RMSE, F-measure, etc.).

What I want to do

In my case I don't want to limit my analysis to accuracy, but want to extend my work with a proper discussion of the computational complexity of the developed algorithm. During my degree I've never done something like that, so I don't know how to conduct it in a formal and proper way.

What I've thought

Currently I'm measuring (using some Java methods) the time needed by each algorithm to complete the specific task. After that, I will draw a plot describing how much time you have to pay in order to reach a specific accuracy level.
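
A common pattern for such wall-clock measurements on the JVM is to run the code a few untimed times first (so the JIT compiles the hot path) and then average several timed runs with System.nanoTime. A sketch, where the warm-up and run counts are arbitrary placeholder choices:

```scala
// Time `body` over `runs` repetitions, after `warmup` untimed runs,
// returning the mean wall-clock time in milliseconds.
def timeMs[A](warmup: Int, runs: Int)(body: => A): Double = {
  (1 to warmup).foreach(_ => body) // let the JIT compile the hot path
  val start = System.nanoTime()
  (1 to runs).foreach(_ => body)
  (System.nanoTime() - start) / 1e6 / runs
}
```

Usage would look like `timeMs(warmup = 3, runs = 10) { recommender.recommend(user) }` (a hypothetical call). For publishable numbers, reporting the variance across runs, and not just the mean, makes the plot much more defensible.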

Questions

• Is there some good work I can take inspiration from (papers, books, etc.)?
• Can you give me some tips, or simply share your experience in the field, on how to conduct this kind of analysis in a proper way?

If something is not clear, please leave me a comment and I'll improve my question. Thank you in advance.

### Planet Clojure

#### Burlington Ruby Conference Talk: How to Consume Lots of Data

Concurrency is all the rage. When you have tons of data being shoved down your throat, you need all the help you can get. All the cool kids are turning to alternatives to try and keep up: node.js, clojure, erlang, elixir, Go. Popular thinking is that Ruby is too slow and won’t scale. But our favorite friend can support it very well.

In my talk at the 2014 Burlington Ruby Conference, I took a look at the actor pattern in Ruby with Celluloid and compared it to similar solutions in other languages.

### StackOverflow

#### Best way to implement "zipLongest" in Scala

I need to implement a "zipLongest" function in Scala; that is, combine two sequences together as pairs, and if one is longer than the other, use a default value. (Unlike the standard zip method, which will just truncate to the shortest sequence.)

I've implemented it directly as follows:

def zipLongest[T](xs: Seq[T], ys: Seq[T], default: T): Seq[(T, T)] = (xs, ys) match {
case (Seq(), Seq())           => Seq()
case (Seq(), y +: rest)       => (default, y) +: zipLongest(Seq(), rest, default)
case (x +: rest, Seq())       => (x, default) +: zipLongest(rest, Seq(), default)
case (x +: restX, y +: restY) => (x, y) +: zipLongest(restX, restY, default)
}


Is there a better way to do it?
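
The standard library already covers this case: Seq#zipAll pads the shorter side with a supplied element, so the whole method can collapse to a one-liner with the same signature as above:

```scala
// zipAll pads `xs` with the first default and `ys` with the second,
// so passing `default` twice handles whichever side is shorter.
def zipLongest[T](xs: Seq[T], ys: Seq[T], default: T): Seq[(T, T)] =
  xs.zipAll(ys, default, default)
```

Unlike the hand-rolled recursion, this is also safe on very long sequences, since it does not grow the call stack.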

### Planet Theory

#### Hashing Summer School

Back in July I took part in the Hashing Summer School in Copenhagen.  This was nominally set up by me, Rasmus Pagh, and Mikkel Thorup, though Mikkel was really the host organizer that put it all together.

The course materials are all online here.  One thing that was a bit different is that it wasn't just lectures -- we really did make more of a "summer school" by putting together a lot of (optional) exercises, and leaving time for people to work through some of them in teams.  I am hoping the result is a really nice resource.  There are lectures with the video online, and also the slides and exercises.  Students could go through whatever parts they like on their own, or people might find the material useful in preparing their own lectures when teaching graduate-level topics in hashing.

### StackOverflow

#### installing JDK8 on Windows XP - advapi32.dll error

I downloaded JDK8 build b121 and while trying to install I'm getting the following error:

the procedure entry point RegDeleteKeyExA could not be located in the dynamic link library ADVAPI32.dll

The operating system is Windows XP, Version 2002 Service Pack 3, 32-bit.

### Planet Theory

#### Turing's Oracle

My daughter had a summer project to read and summarize some popular science articles. Having heard me talk about Alan Turing more than a few times, she picked a cover story from a recent New Scientist. The cover copy, "Turing's Oracle: Will the universe let us build the ultimate thinking machine", sounds like an AI story, but in fact it is more of an attack on the Church-Turing thesis. The story is behind a paywall but here is an excerpt:
He called it the "oracle". But in his PhD thesis of 1938, Alan Turing specified no further what shape it might take...Turing has shown with his universal machine that any regular computer would have inescapable limitations. With the oracle, he showed how you might smash through them.
This is a fundamental misinterpretation of Turing's oracle model. Here is what Turing said in his paper Systems of Logic Based on Ordinals, Section 4.
Let us suppose we are supplied with some unspecified means of solving number-theoretic problems; a kind of oracle as it were. We shall not go any further into the nature of the oracle apart from saying it cannot be a machine. (emphasis mine)
The rest of the section defines the oracle model and basically argues that for any oracle O, the halting problem relative to O is not computable relative to O. Turing is arguing here that there is no single hardest problem, there is always something harder.

If you take O to be the usual halting problem then a Turing machine equipped with O can solve the halting problem, just by querying the oracle. But that doesn't mean you have some machine that solves the halting problem, for, as Turing so eloquently argued in Section 9 of his On Computable Numbers, no machine can compute such an O. Turing created the oracle model, not because he thought it would lead to a process that would solve the halting problem, but because it allowed him to show there are problems even more difficult.

Turing's oracle model, like so much of his work, has played a major role in both computability and computational complexity theory. But one shouldn't twist this model to think the oracle could lead to machines that solve non-computable problems and it is sacrilege to suggest that Turing himself would think that.

### StackOverflow

#### How do I create an explicit companion object for a case class which behaves identically to the replaced compiler provided implicit companion object?

I have a case class defined as such:

case class StreetSecondary(designator: String, value: Option[String])


I then define an explicit companion object:

object StreetSecondary {
//empty for now
}


The act of defining the explicit companion object StreetSecondary causes the compiler-produced "implicit companion object" to be lost; i.e., replaced with no ability to access the compiler-produced version. For example, the tupled method is available on the case class StreetSecondary via this implicit companion object. However, once I define the explicit companion object, the tupled method is "lost".

So, what do I need to define/add/change to the above StreetSecondary explicit companion object to regain all the functionality lost with the replacement of the compiler provided implicit companion object? And I want more than just the tupled method restored. I want all functionality (for example, including extractor/unapply) restored.

Thank you for any direction/guidance you can offer.

UPDATE 1

I have done enough searching to discover several things:

A) The explicit companion object must be defined BEFORE its case class (at least that is the case in the Eclipse Scala-IDE WorkSheet, and the code doesn't work in the IntelliJ IDE's WorkSheet regardless of which comes first).

B) There is a technical trick to force tupled to work (thank you drstevens): (StreetSecondary.apply _).tupled While that solves the specific tupled method problem, it still doesn't accurately or completely describe what the scala compiler is providing in the implicit companion object.

C) Finally, the explicit companion object can be defined to extend a function which matches the signature of the parameters of the primary constructor and returns an instance of the case class. It looks like this:

object StreetSecondary extends ((String, Option[String]) => StreetSecondary) {
//empty for now
}


Again, I am still not confident this accurately or completely describes what the Scala compiler is providing in the implicit companion object.
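
To make point C concrete: once the explicit companion extends the matching function type, the Function2 members such as tupled and curried come back, because the compiler-generated apply of the case class satisfies the abstract apply of the function trait. A small self-contained sketch:

```scala
case class StreetSecondary(designator: String, value: Option[String])

// Extending the function type restores Function2 members (tupled, curried);
// the compiler still generates apply/unapply inside this explicit companion.
object StreetSecondary extends ((String, Option[String]) => StreetSecondary) {
  // room for extra members
}

// tupled and curried now work again:
val fromTuple: ((String, Option[String])) => StreetSecondary = StreetSecondary.tupled
```

This is only a sketch of the pattern already quoted in the question, not a claim that it restores every compiler-provided member.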

#### Lein test failing with No such var: leiningen.util.injected/add-hook

When I run lein test I get this error in a new project, without having touched tests or test-related configuration:

Exception in thread "main" java.lang.RuntimeException: No such var: leiningen.util.injected/add-hook, compiling:(NO_SOURCE_PATH:1)
at clojure.lang.Compiler.analyze(Compiler.java:6235)
at clojure.lang.Compiler.analyze(Compiler.java:6177)
at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3452)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6411)
at clojure.lang.Compiler.analyze(Compiler.java:6216)
at clojure.lang.Compiler.analyze(Compiler.java:6177)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5572)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6409)
at clojure.lang.Compiler.analyze(Compiler.java:6216)
at clojure.lang.Compiler.analyze(Compiler.java:6177)
at clojure.lang.Compiler$IfExpr$Parser.parse(Compiler.java:2597)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6409)
at clojure.lang.Compiler.analyze(Compiler.java:6216)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6397)
at clojure.lang.Compiler.analyze(Compiler.java:6216)
at clojure.lang.Compiler.analyze(Compiler.java:6177)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5572)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6409)
at clojure.lang.Compiler.analyze(Compiler.java:6216)
at clojure.lang.Compiler.analyze(Compiler.java:6177)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5572)
at clojure.lang.Compiler$TryExpr$Parser.parse(Compiler.java:2091)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6409)
at clojure.lang.Compiler.analyze(Compiler.java:6216)
at clojure.lang.Compiler.analyze(Compiler.java:6177)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5572)
at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5008)
at clojure.lang.Compiler$FnExpr.parse(Compiler.java:3629)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6407)
at clojure.lang.Compiler.analyze(Compiler.java:6216)
at clojure.lang.Compiler.eval(Compiler.java:6462)
at clojure.lang.Compiler.eval(Compiler.java:6455)
at clojure.lang.Compiler.eval(Compiler.java:6431)
at clojure.core$eval.invoke(core.clj:2795)
at clojure.main$eval_opt.invoke(main.clj:296)
at clojure.main$initialize.invoke(main.clj:315)
at clojure.main$null_opt.invoke(main.clj:348)
at clojure.main$main.doInvoke(main.clj:426)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:405)
at clojure.lang.AFn.applyToHelper(AFn.java:163)
at clojure.lang.Var.applyTo(Var.java:518)
at clojure.main.main(main.java:37)
Caused by: java.lang.RuntimeException: No such var: leiningen.util.injected/add-hook
at clojure.lang.Util.runtimeException(Util.java:156)
at clojure.lang.Compiler.resolveIn(Compiler.java:6694)
at clojure.lang.Compiler.resolve(Compiler.java:6664)
at clojure.lang.Compiler.analyzeSymbol(Compiler.java:6625)
at clojure.lang.Compiler.analyze(Compiler.java:6198)
... 42 more


### QuantOverflow

#### Using linear regression on (lagged) returns of one stock to predict returns of another

Suppose I want to build a linear regression to see if returns of one stock can predict returns of another. For example, let's say I want to see if the VIX return on day X is predictive of the S&P return on day (X + 30). How would I go about this?

The naive way would be to form pairs (VIX return on day 1, S&P return on day 31), (VIX return on day 2, S&P return on day 32), ..., (VIX return on day N, S&P return on day N + 30), and then run a standard linear regression. A t-test on the coefficients would then tell if the model has any real predictive power. But this seems wrong to me, since my points are autocorrelated, and I think the p-value from my t-test would underestimate the true p-value. (Though IIRC, the t-test would be asymptotically unbiased? Not sure.)

So what should I do? Some random thoughts I have are:

• Take a bunch of bootstrap samples on my pairs of points, and use these to estimate the empirical distribution of my coefficients and p-values. (What kind of bootstrap do I run? And should I be running the bootstrap on the coefficient of the model, or on the p-value?)
• Instead of taking data from consecutive days, only take data from every K days. For example, use (VIX return on day 1, S&P return on day 31), (VIX return on day 11, S&P return on day 41), etc. (It seems like this would make the dataset way too small, though.)

Are any of these thoughts valid? What are other suggestions?
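
To illustrate the block-bootstrap idea from the first bullet (a sketch with made-up data; the block length and repetition count are arbitrary choices): resampling contiguous blocks rather than single observations preserves the serial dependence that makes the naive t-test optimistic.

```scala
import scala.util.Random

// Ordinary least-squares slope of y on x.
def slope(x: Array[Double], y: Array[Double]): Double = {
  val mx = x.sum / x.length
  val my = y.sum / y.length
  val cov  = x.indices.map(i => (x(i) - mx) * (y(i) - my)).sum
  val varx = x.map(v => (v - mx) * (v - mx)).sum
  cov / varx
}

// Moving-block bootstrap: resample contiguous blocks of length `block`
// (preserving autocorrelation) and return the bootstrap slope estimates.
def blockBootstrapSlopes(x: Array[Double], y: Array[Double],
                         block: Int, reps: Int, rng: Random): Seq[Double] = {
  val n = x.length
  (1 to reps).map { _ =>
    val idx = Iterator
      .continually(rng.nextInt(n - block + 1)) // random block start
      .flatMap(s => s until s + block)         // expand to block indices
      .take(n)
      .toArray
    slope(idx.map(i => x(i)), idx.map(i => y(i)))
  }
}
```

The spread of the returned slopes gives an empirical confidence interval for the coefficient that does not rely on the i.i.d. assumption of the standard t-test.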

### StackOverflow

#### Error: scala: No 'scala-library*.jar' in Scala compiler library

Environment: Play 2.3.0/Scala 2.11.1/IntelliJ 13.1

I used Typesafe Activator 1.2.1 to create a new project with Scala 2.11.1. After the project was created, I ran gen-idea. The generated IDEA project fails to compile with the error:

Error: scala: No 'scala-library*.jar' in Scala compiler library in test

Am I doing something wrong? Workaround?

### Planet Clojure

#### Common Lisp or Clojure Developer, Adelaide or remote

Common Lisp or Clojure Developer
A fantastic opportunity for a Common Lisp developer or Clojure developer that is fast, adaptable and can work independently or fit in well into a team

• Experience with Common Lisp or Clojure is a MUST
• Adelaide or the opportunity to work 100% remotely!
• Great career opportunity – Great Salary or Contract rate

Experience with Common Lisp or Clojure is a must (could be non-commercial) as well as general knowledge of relational databases and web technologies.

This role could be 100% remote for the right person, joining a top-class team on a great product which could become the Common Lisp application with the largest customer base in the world. The successful applicant will join a small, focused team in maintaining and furthering the development of a leading multivariate testing platform.

Familiarity in the following areas would be considered a plus: backend web server technology, Javascript, PostgreSQL, SQL Server, statistics, distributed computing, Lispworks, any distributed version control system. A high degree of autonomy and self-motivation will be expected.

This is a great career building opportunity and salary package on offer! Although will consider contractors!

If this is of interest I’d be keen to discuss with you; please email me on stewart@totalresource.com.au or call 0061 (0)415 344 427

### Edward Z. Yang

#### The fundamental problem of programming language package management

Why are there so many goddamn package managers? They sprawl across both operating systems (apt, yum, pacman, Homebrew) as well as for programming languages (Bundler, Cabal, Composer, CPAN, CRAN, CTAN, EasyInstall, Go Get, Maven, npm, NuGet, OPAM, PEAR, pip, RubyGems, etc etc etc). "It is a truth universally acknowledged that a programming language must be in want of a package manager." What is the fatal attraction of package management that makes programming language after programming language jump off this cliff? Why can't we just, you know, reuse an existing package manager?

You can probably think of a few reasons why trying to use apt to manage your Ruby gems would end in tears. "System and language package managers are completely different! Distributions are vetted, but that's completely unreasonable for most libraries tossed up on GitHub. Distributions move too slowly. Every programming language is different. The different communities don't talk to each other. Distributions install packages globally. I want control over what libraries are used." These reasons are all right, but they are missing the essence of the problem.

The fundamental problem is that programming language package management is decentralized.

This decentralization starts with the central premise of a package manager: that is, to install software and libraries that would otherwise not be locally available. Even with an idealized, centralized distribution curating the packages, there are still two parties involved: the distribution and the programmer who is building applications locally on top of these libraries. In real life, however, the library ecosystem is further fragmented, composed of packages provided by a huge variety of developers. Sure, the packages may all be uploaded and indexed in one place, but that doesn't mean that any given author knows about any other given package. And then there's what the Perl world calls DarkPAN: the uncountable lines of code which probably exist, but which we have no insight into because they are locked away on proprietary servers and source code repositories. Decentralization can only be avoided when you control absolutely all of the lines of code in your application... but in that case, you hardly need a package manager, do you? (By the way, my industry friends tell me this is basically mandatory for software projects beyond a certain size, like the Windows operating system or the Google Chrome browser.)

Decentralized systems are hard. Really, really hard. Unless you design your package manager accordingly, your developers will fall into dependency hell. Nor is there one "right" way to solve this problem: I can identify at least three distinct approaches to the problem among the emerging generation of package managers, each of which has its benefits and downsides.

Pinned versions. Perhaps the most popular school of thought is that developers should aggressively pin package versions; this approach is advocated by Ruby's Bundler, PHP's Composer, Python's virtualenv and pip, and generally any package manager which describes itself as inspired by the Ruby/node.js communities (e.g. Java's Gradle, Rust's Cargo). Reproducibility of builds is king: these package managers solve the decentralization problem by simply pretending the ecosystem doesn't exist once you have pinned the versions. The primary benefit of this approach is that you are always in control of the code you are running. Of course, the downside of this approach is that you are always in control of the code you are running. An all-too-common occurrence is for dependencies to be pinned, and then forgotten about, even if there are important security updates to the libraries involved. Keeping bundled dependencies up-to-date requires developer cycles, and those cycles are more often than not spent on other things (like new features).

A stable distribution. If bundling requires every individual application developer to spend effort keeping dependencies up-to-date and testing if they keep working with their application, we might wonder if there is a way to centralize this effort. This leads to the second school of thought: to centralize the package repository, creating a blessed distribution of packages which are known to play well together, and which will receive bug fixes and security fixes while maintaining backwards compatibility. In programming languages, this is much less common: the two I am aware of are Anaconda for Python and Stackage for Haskell. But if we look closely, this model is exactly the same as the model of most operating system distributions. As a system administrator, I often recommend my users use libraries that are provided by the operating system as much as possible. They won't take backwards incompatible changes until we do a release upgrade, and at the same time you'll still get bugfixes and security updates for your code. (You won't get the new hotness, but that's essentially contradictory with stability!)

Embracing decentralization. Up until now, both of these approaches have thrown out decentralization, requiring a central authority, either the application developer or the distribution manager, for updates. Is this throwing out the baby with the bathwater? The primary downside of centralization is the huge amount of work it takes to maintain a stable distribution or keep an individual application up-to-date. Furthermore, one might not expect the entirety of the universe to be compatible with one another, but this doesn't stop subsets of packages from being useful together. An ideal decentralized ecosystem distributes the problem of identifying what subsets of packages work across everyone participating in the system. Which brings us to the fundamental, unanswered question of programming language package management:

How can we create a decentralized package ecosystem that works?

Here are a few things that can help:

1. Stronger encapsulation for dependencies. One of the reasons why dependency hell is so insidious is that a dependency of a package is often an inextricable part of its outward-facing API: thus, the choice of a dependency is not a local choice, but rather a global choice which affects the entire application. Of course, if a library uses some other library internally, and this choice is entirely an implementation detail, it shouldn't result in any sort of global constraint. Node.js's NPM takes this choice to its logical extreme: by default, it doesn't deduplicate dependencies at all, giving each library its own copy of each of its dependencies. While I'm a little dubious about duplicating everything (it certainly occurs in the Java/Maven ecosystem), I certainly agree that keeping dependency constraints local improves composability.
2. Advancing semantic versioning. In a decentralized system, it's especially important that library writers give accurate information, so that tools and users can make informed decisions. Wishful, invented version ranges and artistic version number bumps simply exacerbate an already hard problem (as I mentioned in my previous post). If you can enforce semantic versioning, or better yet, ditch semantic versions and record the true, type-level dependency on interfaces, our tools can make better choices. The gold standard of information in a decentralized system is, "Is package A compatible with package B", and this information is often difficult (or impossible, for dynamically typed systems) to calculate.
3. Centralization as a special-case. The point of a decentralized system is that every participant can make policy choices which are appropriate for them. This includes maintaining their own central authority, or deferring to someone else's central authority: centralization is a special-case. If we suspect users are going to attempt to create their own, operating system style stable distributions, we need to give them the tools to do so... and make them easy to use!
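To make point 2's semantic-versioning check concrete, here is a toy sketch (mine, not the author's) of the compatibility judgment a tool might perform. The `SemVer` class and its rules are invented for illustration, and deliberately follow the common convention that major bumps (and, for 0.x releases, minor bumps) signal breaking changes:

```scala
// Hypothetical illustration: can `installed` satisfy a dependency
// declared against `declared`, under conventional semver rules?
case class SemVer(major: Int, minor: Int, patch: Int)

object SemVer {
  def parse(s: String): SemVer = s.split('.') match {
    case Array(ma, mi, pa) => SemVer(ma.toInt, mi.toInt, pa.toInt)
    case _                 => throw new IllegalArgumentException(s"not a semver: $s")
  }

  private val byFields =
    Ordering.by[SemVer, (Int, Int, Int)](v => (v.major, v.minor, v.patch))

  // Compatible iff `installed` is at least as new as `declared` and
  // has not crossed a breaking boundary (major, or minor while 0.x).
  def compatible(declared: SemVer, installed: SemVer): Boolean = {
    val sameLine =
      if (declared.major == 0)
        installed.major == 0 && installed.minor == declared.minor
      else installed.major == declared.major
    sameLine && byFields.gteq(installed, declared)
  }
}
```

Real resolvers have to answer this question transitively over an entire dependency graph, which is exactly where the accuracy of the declared version information starts to matter.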

For a long time, the source control management ecosystem was completely focused on centralized systems. Distributed version control systems such as Git fundamentally changed the landscape: although Git may be more difficult to use than Subversion for a non-technical user, the benefits of decentralization are diverse. The Git of package management doesn't exist yet: if someone tells you that package management is solved, just reimplement Bundler, I entreat you: think about decentralization as well!

### CompsciOverflow

How do we cascade two DFA's M1(Q1,S1,R1,F1,G1,qI1) and M2(Q2,S2,R2,F2,G2,qI2) such that the output of M1 is used as the input of M2 and the output of M2 is the output of the cascaded machine M? How can we define M in terms of M1 and M2?

I can tell that R = R2 and qI = qI1 but what about the others?

M(?,?,R2,?,?,qI1)
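Not an authoritative answer, but the textbook cascade (series) construction for machines with output feeds M1's output letter into M2 at each step, which only makes sense when M1's output alphabet equals M2's input alphabet (R1 = S2). A sketch in Scala, with the `Mealy` case class invented for illustration:

```scala
// Hedged sketch of cascading two Mealy-style machines.
// delta: transition function, out: output function, init: start state.
case class Mealy[Q, I, O](delta: (Q, I) => Q, out: (Q, I) => O, init: Q)

// M has state set Q1 x Q2, M1's input alphabet, M2's output alphabet,
// and start state (qI1, qI2). On input s it runs M1, then runs M2 on
// M1's output letter.
def cascade[Q1, Q2, I, M, O](m1: Mealy[Q1, I, M],
                             m2: Mealy[Q2, M, O]): Mealy[(Q1, Q2), I, O] =
  Mealy[(Q1, Q2), I, O](
    delta = (q, s) => (m1.delta(q._1, s), m2.delta(q._2, m1.out(q._1, s))),
    out   = (q, s) => m2.out(q._2, m1.out(q._1, s)),
    init  = (m1.init, m2.init)
  )
```

In the question's notation this gives Q = Q1 × Q2, S = S1, R = R2, qI = (qI1, qI2), with F((q1, q2), s) = (F1(q1, s), F2(q2, G1(q1, s))) and G((q1, q2), s) = G2(q2, G1(q1, s)).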

### TheoryOverflow

#### Algorithms for online clique detection

Are there any algorithms which let you detect cliques when adding/deleting edges based on previously detected cliques? What would be the time/memory complexity of this approach?

### StackOverflow

#### Using scalaz-stream to calculate a digest

So I was wondering how I might use scalaz-stream to generate the digest of a file using java.security.MessageDigest?

I would like to do this using a constant memory buffer size (for example 4KB). I think I understand how to start with reading the file, but I am struggling to understand how to:

1) call digest.update(buf) for each 4KB which effectively is a side-effect on the Java MessageDigest instance, which I guess should happen inside the scalaz-stream framework.

2) finally call digest.digest() to receive back the calculated digest from within the scalaz-stream framework somehow?

I think I understand kinda how to start:

import scalaz.stream._
import java.security.MessageDigest

val f = "/a/b/myfile.bin"
val bufSize = 4096

val digest = MessageDigest.getInstance("SHA-256")

Process.constant(bufSize).toSource
.through(io.fileChunkR(f, bufSize))


But then I am stuck! Any hints please? I guess it must also be possible to wrap the creation, update, retrieval (of the actual digest calculation) and destruction of the digest object in a scalaz-stream Sink or something, and then call .to() passing in that Sink? Sorry if I am using the wrong terminology; I am completely new to scalaz-stream. I have been through a few of the examples but am still struggling.
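Not the scalaz-stream idiom the question asks for, but for reference here is the underlying chunked computation in plain Scala (file path hypothetical). The two effects marked in comments are exactly what would need to be pushed into a scalaz-stream Sink and a final effectful step:

```scala
import java.io.{BufferedInputStream, FileInputStream}
import java.security.MessageDigest

// Hedged sketch: fold 4KB chunks of a file into a SHA-256 digest
// using constant memory, independent of file size.
def sha256(path: String, bufSize: Int = 4096): Array[Byte] = {
  val digest = MessageDigest.getInstance("SHA-256")
  val in = new BufferedInputStream(new FileInputStream(path))
  try {
    val buf = new Array[Byte](bufSize)
    var n = in.read(buf)
    while (n != -1) {
      digest.update(buf, 0, n) // side effect 1: one update per chunk
      n = in.read(buf)
    }
    digest.digest()            // side effect 2: finalize after the last chunk
  } finally in.close()
}
```

The streaming version would thread the same two calls through the pipeline built with `Process.constant(bufSize).toSource.through(io.fileChunkR(f, bufSize))`.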

#### merging dictionaries in ansible

I'm currently building a role for installing PHP using ansible, and I'm having some difficulty merging dictionaries. I've tried several ways to do so, but I can't get it to work like I want it to:

# A vars file:
my_default_values:
  key: value

my_values:
  my_key: my_value

# In a playbook, I create a task to attempt merging the
# two dictionaries (which doesn't work):

- debug: msg="{{ item.key }} = {{ item.value }}"
  with_dict: my_default_values + my_values

# I have also tried:

- debug: msg="{{ item.key }} = {{ item.value }}"
  with_dict: my_default_values|union(my_values)

# I have /some/ success with using j2's update,
# but you can't use j2 syntax in "with_dict", it appears.
# This works:

- debug: msg="{{ my_default_values.update(my_values) }}"

# But this doesn't:

- debug: msg="{{ item.key }} = {{ item.value }}"
  with_dict: my_default_values.update(my_values)


Is there a way to merge two dictionaries, so I can use it with "with_dict"?
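For what it's worth, later Ansible releases (2.0 and newer, after the era this question comes from) ship a `combine` filter for exactly this kind of merge; a sketch using the question's own variable names (verify that your Ansible version has the filter):

```yaml
- debug: msg="{{ item.key }} = {{ item.value }}"
  with_dict: "{{ my_default_values | combine(my_values) }}"
```

On the 1.x series, the usual workaround is the global `hash_behaviour = merge` setting in ansible.cfg, at the cost of changing merge semantics for every hash variable.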

### CompsciOverflow

#### Can coq express its own metatheory?

I'm learning about language metatheory and type systems, and am using coq to formalize my study. One of the things I'd like to do is examine type systems that include dependent types, which I understand is very involved: being able to rely on coq would be invaluable.

Since this type system feature (and other, simpler ones) brings the expressive power of my studied system closer to coq's own, I worry that I might run into a bootstrapping problem that might not reveal itself until much later. Perhaps someone here can address my fears before I set out.

Can coq express its own metatheory? If not, can it still express simpler systems that include common forms of dependent typing?

### StackOverflow

#### Templating multiple yum .repo files with Ansible template module

I am attempting to template yum .repo files. We have multiple internal and external yum repos that the various hosts we manage may or may not use.

I want to be able to specify any number of repos and what .repo file they will be templated in. It makes sense to group these repos in the same .repo file where they have a common purpose (e.g. all centos repos)

I am unable to determine how to combine ansible, yaml and j2 to achieve this. I have tried using the ansible 'with_items', 'with_subelements' and 'with_dict' unsuccessfully.

YAML data

yum_repo_files:
  - centos:
      - name: base
        baseurl: http://mirror/base
  - epel:
      - name: epel
        baseurl: http://mirror/epel


- name: create .repo files
  template: src=yumrepos.j2 dest="/etc/yum.repos.d/{{ item }}.repo"
  with_items: yum_repo_files


j2 template

{% for repofile in yum_repo_files.X %} {# X being the relative index for the current repofile, e.g. centos = 0 and epel = 1 #}
{% for repo in repofile %}
name={{ repo.name }}
baseurl={{ repo.baseurl }}
{% endfor %}
{% endfor %}
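One possible restructuring (my sketch, not from the question): make `yum_repo_files` a dict keyed by .repo file name, loop over it with `with_dict`, and let the template iterate `item.value`. All repo names and URLs below are the question's own:

```yaml
# vars: dict of file-name -> list of repos (hypothetical layout)
yum_repo_files:
  centos:
    - name: base
      baseurl: http://mirror/base
  epel:
    - name: epel
      baseurl: http://mirror/epel

# task: one template call per .repo file
- name: create .repo files
  template: src=yumrepos.j2 dest="/etc/yum.repos.d/{{ item.key }}.repo"
  with_dict: yum_repo_files
```

The template then only ever sees one file's list of repos, so no relative index is needed:

```
{% for repo in item.value %}
[{{ repo.name }}]
name={{ repo.name }}
baseurl={{ repo.baseurl }}
{% endfor %}
```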


### CompsciOverflow

#### What is the notation for bounding running time in worst case with concrete example resulting in that worst case running time

I know that Big O is used to bound worst-case running time. So an algorithm with running time $O(n^5)$ means its running time in the worst case is at most $n^5$ asymptotically.

Similarly, one can say that for example merge sort's running time is $O(n^2)$ which is correct. But we know that there is a better bound for it: $O(n\log n)$. Technically speaking, one can say that every polytime algorithm has running time $O(2^n)$. This is correct, but not useful.

So my question is: what is the notation used for worst-case running time such that there exists an input on which that worst-case running time actually occurs?

In the merge sort example, one cannot construct an input on which merge sort would take $n^2$ comparisons, but one can construct an input that requires about $n\log n$ comparisons.
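For what it's worth (my note, not part of the question): the standard device is to apply $\Theta$-notation to the worst-case running time $T_{\mathrm{worst}}(n)$, since

$$T_{\mathrm{worst}}(n) = \Theta(f(n)) \iff T_{\mathrm{worst}}(n) = O(f(n)) \ \text{and}\ T_{\mathrm{worst}}(n) = \Omega(f(n)),$$

and the $\Omega$ half is witnessed precisely by a family of inputs on which the bound is attained. In this sense merge sort's worst-case running time is $\Theta(n\log n)$ but not $\Theta(n^2)$, even though $O(n^2)$ is technically true.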

### StackOverflow

#### Trying to use "with_items" and "when" in a Ansible playbook to clone a repo

Hello everyone and thanks for stopping by. As the title says, I'm trying to use those Ansible modules as follows: I want to clone a WordPress repo depending on whether a variable is "yes" or "no".

This is my main Ansible playbook (using it with Vagrant through vagrant --provision). I'll provide just relevant parts.

vars:
  nginx_server_blocks:
    - { server_name: "dev.simple-site.io", document_root: "simple-site", wordpress: "no" }
    - { server_name: "dev.wordpress-site.io", document_root: "wordpress-site", wordpress: "yes" }

- name: clone Wordpress repo
  git: repo=git:https://github.com/WordPress/WordPress.git
       dest=/var/www/{{ item.document_root }}
  with_items: nginx_server_blocks
  when: item.wordpress == "yes"


When I run vagrant provision I get this error:

fatal: [default] => failed to parse: SUDO-SUCCESS-rtlizwskstbaxddabxlgqtxxqzambxnh
Traceback (most recent call last):
File "/home/vagrant/.ansible/tmp/ansible-tmp-1408592922.35-152092658109200/git", line 2119, in <module>
main()
File "/home/vagrant/.ansible/tmp/ansible-tmp-1408592922.35-152092658109200/git", line 524, in main
File "/home/vagrant/.ansible/tmp/ansible-tmp-1408592922.35-152092658109200/git", line 1986, in add_git_host_key
fqdn = get_fqdn(module.params['repo'])
File "/home/vagrant/.ansible/tmp/ansible-tmp-1408592922.35-152092658109200/git", line 2022, in get_fqdn
if "@" in result:
TypeError: argument of type 'NoneType' is not iterable

FATAL: all hosts have already failed -- aborting


Any ideas about the error? I've googled it and read the Ansible docs about using when and with_items, but no luck.

If it helps, my host machine is a Mac and the guest is Ubuntu 14.04 through Vagrant. Ansible was installed via pip and it's 1.7.
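Not an authoritative diagnosis, but the traceback ends inside the git module's get_fqdn helper returning None, which happens when it cannot parse the repo URL, and `repo=git:https://github.com/...` mixes two URL schemes. A sketch of the same task with a plain https URL (everything else unchanged):

```yaml
- name: clone Wordpress repo
  git: repo=https://github.com/WordPress/WordPress.git
       dest=/var/www/{{ item.document_root }}
  with_items: nginx_server_blocks
  when: item.wordpress == "yes"
```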

#### Uniqueness of persistenceId in akka-persistence

I'm using the scala api for akka-persistence to persist a group of actor instances that are organized into a tree. Each node in the tree is a persistent actor and is named based on the path to that node from a 'root' node. The persistenceId is set to the name. For example the root node actor has persistenceId 'root'. The next node down has persistenceId 'root-europe'. Another actor might have persistenceId 'root-europe-italy'.

The state in each actor includes a list of the names of its children. E.g. the 'root' actor maintains a list of 'europe', 'asia' etc as part of its state.

I have implemented snapshotting for this system. When the root is triggered to snapshot, it does so and then tells each child to do the same.

The problem arises during snapshot recovery. When I re-create an actor with persistenceId = 'root' (by passing in the name as a constructor parameter), the SnapshotOffer event received by that actor is wrong. It is, for example, 'root-europe-italy....'. This seems like a contradiction of the contract for persistence, where the persistenceId identifies the actor state to be recovered. I got around this problem by reversing the persistenceId of node actors (e.g. 'italy-europe-root') so this seems to be something related to the way files are retrieved by the persistence module. Note that I tried other approaches first, for example I used a variety of separators between the node names, or no separator at all.

Has anyone else experienced this problem, or can an akka-persistence developer help me understand why this might have happened?

BTW: I am using the built-in file-based snapshot storage for now.

Thanks.

### StackOverflow

#### How to print source code of "IF" condition in "THEN"

I would like to print Scala source code of IF condition while being in THEN section.

Example: IF{ 2 + 2 < 5 } THEN { println("I am in THEN because: " + sourceCodeOfCondition) }

Let's skip THEN section right now, the question is: how to get source code of block after IF?

I assume that IF should be a macro...

Note: this question is a redefined version of Macro to access source code of function at runtime, where I described that { val i = 5; List(1, 2, 3); true }.logValueImpl works for me (according to the other question, Macro to access source code text at runtime).

### QuantOverflow

#### M&A hedging an equity portfolio against an index

Quick Note

This question was already posted under the userID user8170. Reason being I could not access my account. Now I am able to login to my account I am reposting the question here and will delete it from the profile user8170 (no comments or answers were posted anyway).

Question

I am trying to run a simple back test on a M&A strategy.

The idea is to buy the target company for the length of the deal and obviously hope to see a profit. The weight given to each deal is decided by the size of the deal.

Some of the deals are part cash, part equity in my study. I have a field in my data called 'Stock Exchange Ratio - Buyer Shares' (SER). This field is defined as the number of shares being issued by the acquirer to the target.

So, for example, if the acquirer ABC is buying the target company TAR in a part-cash, part-stock deal and the SER is 0.8, then investors holding TAR will receive 0.8 shares of ABC for every TAR share they hold.

So when I have deals that are not 100% cash I will get extra equity exposure (from the acquirer) that I need to hedge as I understand it.

Rather than shorting every acquiring company, and partly for simplicity, I am going to short the MSCI World Index. However, I do not know how to calculate how much I need to short to hedge my portfolio against the index. I have all the betas for the acquiring companies.

Portfolio

Acquirer  Target  Deal Size  Weight  Stock Exchange Ratio - Buyer Shares
ABC       DEF     $1,000m    50%     0
MNO       LMN     $600m      30%     0.6
GHI       QRS     $400m      20%     2.5

Update

The betas for the 3 companies above are: ABC 0.93, MNO 1.11, GHI 1.14.

### Lobsters

#### fork() can fail: this is important

### /r/compsci

#### Security Analysis of a Full-Body Scanner

### StackOverflow

#### Merging Maps using aggregate

For any given collection of Map, for instance,

val in = Array(
  Map("a" -> 1, "b" -> 2),
  Map("a" -> 11, "c" -> 4),
  Map("b" -> 7, "c" -> 10))

how to use aggregate on in.par so as to merge the maps into

Map("a" -> 12, "b" -> 9, "c" -> 14)

Note: Map merging has been asked multiple times, yet I am looking for a solution with aggregate on parallel collections. Many thanks.

### TheoryOverflow

#### What are some good introductory books on type theory?

I've recently been studying Haskell and programming languages. Could someone recommend some books on type theory?

### Fred Wilson

#### On SoundCloud

Today, our portfolio company SoundCloud is announcing its content partners program, called On SoundCloud.

For creators, there are three offerings: Partner, Pro, and Premier. Anyone can be a Partner. For a small monthly fee, you can upgrade to Pro. And if you are really serious, then you can become Premier and make money on SoundCloud.

For listeners, there will be two tiers. A free, advertising supported offering that values artists. As Alex Ljung, founder and CEO of SoundCloud, says here:

Every time you see or hear an ad, an artist gets paid

There will also be a subscription offering that will be ad free and offer other features for listeners.

For brands, SoundCloud becomes a popular social platform where they can engage with creators and listeners. Here’s more on SoundCloud’s offerings for brands.

Here’s the thing that many people miss about SoundCloud. It’s not like iTunes, or Spotify, or Pandora. It’s a peer network with a social architecture that emphasizes engagement and sharing. Like Twitter and Tumblr and a number of other popular social platforms, SoundCloud treats everyone as peers in its network.
My profile is almost identical to an artist’s profile on SoundCloud. I can do the same things they can do and they can do the same things I can do. The same is true of a brand’s profile. This social architecture encourages engagement, sharing, commenting, and favoriting. It’s like the artists, listeners, and brands are all hanging out together at one big party.

These social peer networks treat advertising very differently. The ads are native. On Twitter, the advertising is a Tweet. On Tumblr, the advertising is a post. On SoundCloud, the advertising is a track. You see the ads in your feed and you choose to engage with them if they are inviting. In the best case, you enjoy them so much that you favorite or reblog/retweet them. And brands can sponsor/promote tracks from other users. Think of Red Bull sponsoring and promoting artists on SoundCloud.

The New York Times has an article today about On SoundCloud. It covers all the challenges that SoundCloud has overcome in getting to this place. It’s been a ton of work for the team at SoundCloud to get this launched, and there is certainly a lot more ahead of them as they undertake to get every artist On SoundCloud. I am very optimistic that will happen because this network of 175mm mobile listeners all over the world connected together and sharing the audio they love with each other is too powerful to ignore.

### StackOverflow

#### How to RegExp file with Spark?

I have UDP_file.txt containing:

2014-03-02 07:59:37;source-address=123.235.78.125 source-port=1780
2014-03-02 07:59:37;source-address=123.235.132.181 source-port=56399
2014-03-02 07:59:37;source-address=123.234.141.253 source-port=49170
2014-03-02 07:59:37;source-address=123.234.104.225 source-port=39123
2014-03-02 07:59:37;source-address=123.234.104.225 fake-port=0000

What I need to do is:

• load the file,
• RegExp it,
• save lines that match the pattern in file 'good_records.txt',
• save lines that don't match the pattern in file 'bad_records.txt'.
val file_in = sc.textFile("UPD_file.txt")
val FullName = """(^.{19}).+source-address=([^"]+) source-port=([^"]+)""".r

When I test the pattern on one row it works:

scala> val FullName(ip,sa,sp) = "2014-03-02 07:59:37;source-address=10.114.104.225 source-port=39123"
ip: String = 2014-03-02 07:59:37
sa: String = 10.114.104.225
sp: String = 39123

or

scala> "2014-03-02 07:59:37;source-address=10.115.78.125 source-port=1780" match { case FullName(ip,sa,sp) => (ip,sa,sp) }
(2014-03-02 07:59:37,10.115.78.125,1780)

But I have no idea how to use it on each line of the loaded file.

file_in.AndWhatNow?

Can you help? I will be grateful for any suggestions.

Pawel

### CompsciOverflow

#### Deciding if a finite automaton accepts strings of every length

The question is: you're given a DFA. Give an algorithm which tells you whether strings of all lengths $n \in \mathbb{N}$ are accepted or not.

What I was doing: I have an algorithm to count the number of accepted strings of some fixed length $n$. Now let there be $k$ states. Suppose we get a positive result (i.e. the number of strings is $> 0$) for all $n$ up to $k$. Then check $k+1$: if it gives a positive result, we can say at least one state is visited twice by that path of length $k+1$. That means we get an $x$ such that $k+1+nx$ is accepted for all $n \geq 0$; if $x = 1$ then we are done. If not, then check $k+2$; again we get a $y$ like that. So for all lengths $> k$ we are getting arithmetic progressions of acceptable lengths, but can we then say that after some finite point all lengths are accepted?

### UnixOverflow

#### How do I send a shutdown event to a QEMU guest (OpenBSD)?

I'm using virtualisation solely to install OpenBSD onto the bare hardware, and during the installation, the redirection to the serial port didn't get configured, so I ended up with the system running, but no way to log in and do a clean shutdown.
kvm -m 6144 -smp 4 -drive file=/dev/sda,if=ide \
  -drive file=/dev/sdb,if=scsi -drive file=/dev/sdc,if=scsi \
  -cdrom install52.iso -boot d -nographic

How can I send a shutdown event to this session? AFAIK, Ctrl-a x as shown here, or a pkill kvm, would not do a clean shutdown.

Alternatively, how can I switch from the -nographic mode into the -curses mode?

### StackOverflow

#### Scala case class, can't override constructor parameter

I can't make simple stuff work. Here is my case class:

case class MyCaseClass(left: Long, right: Long = Option[Long], operator: Operator = Option[Operator]) {
  def inRange(outer: Long) = outer >= left && outer <= right
}

And I try to create it:

val instance = MyCaseClass(leftValue)

And I get the compile error:

Unspecified value parameters right, operator

Why? I've tried 100500 suggestions from SO and I can't make it work. I just want to have 2 constructors for a case class: one with 3 params and one with 1 param...

UPD: this works, as Ende Neu suggested:

case class MyCaseClass(left: Long, right: Option[Long] = None, operator: Option[Operator] = None)

The problem is that I have to wrap right and operator into Option. I'm not good at the "implicit" stuff of Scala, but it looks to me like I could do the wrapping/unwrapping implicitly...? Right?

UPD: this allows you to avoid Option wrapping for the right and operator parameters. Details are in the answer.

object MyCaseClass {
  def apply(left: Long, right: Long, operator: Operator) = new MyCaseClass(left, Option(right), Option(operator))
}

#### Can I create a default OPTIONS method directive for all entry points in my route?

I don't want to explicitly write:

options {
  ...
}

for each entry point / path in my Spray route. I'd like to write some generic code that will add OPTIONS support for all paths. It should look at the routes and extract supported methods from them. I can't paste any code since I don't know how to approach it in Spray.
The reason I'm doing it is I want to provide a self-discoverable API that adheres to HATEOAS principles.

### QuantOverflow

#### hedging with a 3 month fx forward every month

I think this is a bit of an odd question. Let us say I want to hedge my fx exposure every month, but using 3 month forwards. How can I do that? Is it not easier to just use 1 month forwards? I recalculate my exposure every month.

In other words, let us say I am a US investor but I get my profits from a euro company. So every month I calculate the expected return I might get the next month and do the hedge accordingly. This is straightforward with a one month forward, but assuming there exist only 3 month forward contracts in the market (hypothetical), how can one do the hedging?

### StackOverflow

#### how to make 'while' return a collection? [duplicate]

This question already has an answer here.

I ran into a situation where I needed while to output a collection. Here is an example (reading a JDBC ResultSet).

What I would have liked:

val rs: java.sql.ResultSet = ???
val cols = while (rs.next()) for (i <- 1 to numberOfColumns) yield rs.getString(i)

What I ended up doing:

val rs: java.sql.ResultSet = ???
var cols: List[Seq[String]] = Nil
while (rs.next()) cols ::= (for (i <- 1 to numberOfColumns) yield rs.getString(i))

Is there any way to make while return a List (or some other collection)? I want to avoid using a mutable variable.

### CompsciOverflow

#### How to find the degree of a polynomial represented as a circuit?

I know very little about arithmetic circuits, so maybe this is something well-known. Given a small circuit consisting of $\{1, x, -, +, *\}$ defining a one-variable polynomial, let it additionally be known that the degree of this polynomial does not exceed $d$ and that all the coefficients are small. I wonder if there exists a fast way to find the actual degree of this polynomial?
Using FFT and some small field, one can do it in $O(d)$ time (ignoring polylog factors), but that much time is sufficient to reconstruct the entire polynomial, so I hope computing only the degree can be done more efficiently.

### StackOverflow

#### Jumping forward with the continuation monad

It is possible to jump backward in a program with the continuation monad:

{-# LANGUAGE RecursiveDo #-}
import Control.Monad.Fix
import Control.Monad.Trans.Cont

setjmp = callCC (\c -> return (fix c))

backward = do
  l <- setjmp
  -- some code to be repeated forever
  l

But when I try to jump forward, it is not accepted by GHC:

forward = mdo
  l
  -- some dead code
  l <- setjmp
  return ()

This does not work because there is no instance for MonadFix (ContT r m) for the continuation monad transformer ContT defined in Control.Monad.Trans.Cont. See Section 5.1 of Levent Erkok's thesis for further details.

Is there a way to encode a forward jump without value recursion for the continuation monad? Is there an alternative definition of ContT that has an instance for MonadFix (ContT r m)? There is an unpublished draft by Magnus Carlsson that makes such a proposal, but I am not sure what to make of it in my case.

### DataTau

#### Why I Love Data Engineering

### StackOverflow

#### Issues while setting up lightweight modular staging

I'm trying to get started with the examples here, setting up my dev environment using Scala IDE (Eclipse). So far:

1. I have downloaded LMS, built it using sbt and added the generated jar library to my Eclipse project.
2. I'm trying to write this bit of the code sample provided:

val snippet = new DslDriver[Array[Int],Array[Int]] {
  def snippet(v: Rep[Array[Int]]) = { // Continues

However, DslDriver isn't found inside the scala.virtualization.lms package. The library is being found, so it's not a problem with the build path.

3. I have also installed the scala-virtualized plugin to my Scala IDE.
4. Perhaps this is an Eclipse issue where it can't find the necessary classes?
Should I switch to coding using an editor and building using sbt? Any help would be appreciated. Thanks in advance.

#### Is it possible to write a varnish using zeromq?

I am now working on a VOD project and I want to try to build a proxy server like Varnish (a reverse proxy). I know it is not easy at all and I am just thinking about the "feasibility". I've done some research and found a powerful messaging library called "ZeroMQ". Certainly, ZeroMQ is very useful for writing a server, but I am not sure whether it is also useful for writing a Varnish-like proxy server. I realized that there are a few functions/classes in ZeroMQ which are related to proxy servers, such as "zmq_proxy", but I am not sure whether that is what I really need.

I want a proxy server that can cache the video content in memory from the root server and then send the stream back to the client. (It would be even better if the library had some built-in thread-handling classes/functions.) Will ZeroMQ store the content in main memory? Or is there a way it can? Or do you have any other powerful library for writing a Varnish-like server, maybe ACE or ...? Or should I just customize Varnish itself (e.g. customize my own caching policy), which I think is less fun than writing my own server?

Thanks in advance.

### /r/netsec

#### UPS confirms security breach at multiple stores, some dating back to January

### CompsciOverflow

#### Check whether it is possible to turn one BST into another using only right-rotations

Given two binary search trees T1 and T2, if it is possible to obtain T2 from T1 using only right-rotations, we say that T1 can be right-converted to T2. For example, given three binary search trees T1, T2 and T3:

Tree 1:

    a
   /
  b
 / \
d   c

Tree 2:

  b
 / \
d   a
   /
  c

Tree 3:

  a
 /
d
 \
  b
   \
    c

T2 can be right-converted from T1, and T3 can be right-converted from T1.
But T3 cannot be right-converted from T2, and T2 cannot be right-converted from T3. Given two arbitrary BSTs, how can we determine whether one is right-convertible to the other?

### StackOverflow

#### Type inference is smart enough to figure out the type when the type is operated with another type

Assume this code, which infers the element type of the List:

def doStuff[A](x: List[A]) = x // ignore the result
doStuff(List(3)) // I don't need to specify the type Int here

However, if the type A is operated on together with another type, the inference no longer works and I have to specify the types:

def doStuff[A, B](x: List[A], f: (B, A) => B) = { }

doStuff(List(3), (x: String, y) => x) // compilation fails: missing parameter type
doStuff[Int, String](List(3), (x, y) => x) // compiles fine

May I know why that is? Many thanks in advance.

#### delta-time versus raw-delta-time (difference?)

I'm implementing FPS-style mouselook and keyboard controls, using delta-time to multiply things, and I can choose between delta and raw delta. What is the difference? About non-raw delta, the docs say it "Might be smoothed over n frames vs raw". What will it do to my code/game if I choose non-smoothed over smoothed? Since the docs only say "might be smoothed", that raises a bunch of questions. I'm looking at different ways to "smooth" the transforms.

EDIT: I think the real question is this: if smoothed delta is a calculation based on raw delta, and some people report that smoothed delta gives them weird results, would I be better off writing my own calculation using raw delta?

### UnixOverflow

#### Secure FOSS alternative to Skype on Linux & OpenBSD?

Criteria:

• Makes audio/video calls
• Encrypts the whole traffic (using good encryption)
• Is cross-platform (including Windows 7, etc.)
• Runs on modern Linux distributions (Fedora, Ubuntu, etc.)
• Runs on OpenBSD

Does anybody know a good Free and Open-Source alternative to Skype?
### StackOverflow

#### lein deploy clojars stuck

I am on Windows and attempting to deploy to Clojars. I followed the Leiningen gpg guide and installed Gpg4win as suggested there; I generated a key using the default encryption, set a passphrase, and in short followed the guide 100%. I have gpg on my path. I was able to follow the guide for generating and publishing my keys, and I have copied the public key over to Clojars. I have followed the Leiningen deployment guide to run lein deploy clojars. However, the project just does this and seems to hang forever:

C:\project\project>lein deploy clojars
No credentials found for clojars (did you mean lein deploy clojars?)
See lein help deploy for how to configure credentials.
Username: myemail@mail.com
Password:
Wrote C:\project\project\pom.xml
Created C:\project\project\target\project-0.1.1.jar

And that's it.

### /r/netsec

#### A Hacker Is Trying To Sell Counterfeit US Money On Reddit

### StackOverflow

#### Read entire file in Scala?

What's a simple and canonical way to read an entire file into memory in Scala? (Ideally, with control over character encoding.) The best I can come up with is:

scala.io.Source.fromPath("file.txt").getLines.reduceLeft(_+_)

or am I supposed to use one of Java's god-awful idioms, the best of which (without using an external library) seems to be:

import java.util.Scanner
import java.io.File

new Scanner(new File("file.txt")).useDelimiter("\\Z").next()

From reading mailing-list discussions, it's not clear to me that scala.io.Source is even supposed to be the canonical I/O library. I don't understand what its intended purpose is, exactly. ... I'd like something dead-simple and easy to remember. For example, in these languages it's very hard to forget the idiom ...

Ruby    open("file.txt").read
Ruby    File.read("file.txt")
Python  open("file.txt").read()

### TheoryOverflow

#### Partition planar graph into connected subgraphs of equal size

The work of Jünger, Michael, Gerhard Reinelt, and William R. Pulleyblank,
"On partitioning the edges of graphs into connected subgraphs," Journal of Graph Theory 9.4 (1985): 539-549, states that for a 4-edge-connected graph one can partition its edges into disjoint subsets of size $r$ such that each subset forms a connected subgraph. I wonder whether the same kind of statement can be formulated for a partition of the vertices. For which kinds of graphs can one find a partition of the vertices into disjoint subsets of size $\approx r$ such that each subset forms a connected subgraph (for each $r$)? I'm particularly interested in planar graphs, but would be happy with any class. I can also soften some conditions (it will still meet my needs): for which graph classes is the existence of a partition into fewer than $\frac{\alpha n}{r}$ connected subgraphs of size less than $r$ guaranteed (for some $\alpha$)?

### QuantOverflow

#### Doubt on risk cost criterion

I want to minimize some kind of risk-sensitive cost, but I am confused about which cost criterion to use. I am aware only of expected exponential utility. I want to know what other such measures exist in the literature, which of them would be good, and in which sense one is better than another.

### StackOverflow

#### No operations allowed after connection closed in play framework

The code works fine, but I am noticing that sometimes it gives the error:

com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed.

I am using a simple query, and (so far) the error always occurs here:

def result(id: String, mark: String) = {
  DB.withConnection { implicit c =>
    val result = SQL("SELECT mark FROM subject WHERE id={id}").on("id" -> id).apply().head
    if (result[String]("mark").equals(mark)) {
      Map("result" -> true)
    } else {
      Map("result" -> false)
    }
  }
}

I will provide more information if needed, because I also don't see any error in this code. Do I have to do something about this?
application.conf:

contexts {
  simple-db-lookups {
    fork-join-executor {
      parallelism-factor = 10.0
    }
  }
  expensive-db-lookups {
    fork-join-executor {
      parallelism-max = 4
    }
  }
  db-write-operations {
    fork-join-executor {
      parallelism-factor = 2.0
    }
  }
  expensive-cpu-operations {
    fork-join-executor {
      parallelism-max = 2
    }
  }
}

I am using Scala 2.10 with Play Framework 2.2.

### DataTau

#### Color References and Resources for Visualization Professionals

### Planet FreeBSD

#### FreeBSD ports tree was born twenty years ago, let's celebrate!

It all started with this commit from Jordan Hubbard on August 21, 1994:

### QuantOverflow

#### Testing the validity of a factor model for stock returns

Consider the following system of m regression equations:

$$r^i = X^i \beta^i + \epsilon^i \;\;\; \text{for} \; i=1,2,3,\dots,n$$

where $r^i$ is a $(T\times 1)$ vector of the $T$ observations of the dependent variable, $X^i$ is a $(T\times k)$ matrix of independent variables, $\beta^i$ is a $(k\times 1)$ vector of regression coefficients, and $\epsilon^i$ is the vector of errors for the $T$ observations of the $i^{th}$ regression.

My question is: in order to test the validity of this model for stock returns (i.e. the inclusion of those explanatory variables) using the AIC or BIC criterion, should the criterion be computed on a time-series basis (i.e. for each stock), or on a cross-sectional basis (and then averaged over time)?

### TheoryOverflow

#### LEXICAL ANALYSIS

Respected Sir, which lexical analyzer is used for compiling C programs? I think flex and lex are used to give us a view of how lexical analysis takes place, but I searched the internet and couldn't find anything about the lexical analyzer actually used for C. I hope you will clarify my doubts regarding this topic.
Thanks, Justin

### CompsciOverflow

#### OpenCV: How to focus camera only on required area [on hold]

I want to detect faces using the Viola-Jones algorithm in OpenCV, and it's working fine, but I want to focus only on the region where the face is detected, not on anything else in the video stream.

### StackOverflow

#### Why a case class can be used as a function in an argument

Occasionally, I found an interesting feature of case classes. foo needs a function from 3 Ints to a case class. The code looks like this:

case class Whatever(a: Int, b: Int, c: Int)
def foo(f: (Int, Int, Int) => Whatever) = f(1,2,3).c
foo(Whatever) // compiles fine; the scala compiler is powerful ...........

If Whatever were a normal class, the compilation would obviously fail. Can someone explain why a case class can be used this way? I suspect it is because of the generated factory apply method, but I am not sure. Also, if it is a normal class, is it possible to use it this way, as with a case class?

### /r/freebsd

#### FreeBSD ports tree 20th anniversary

### CompsciOverflow

#### Regular expression (ab U a)* to NFA with two states (Sipser)?

In the 3rd edition of Sipser's Introduction to the Theory of Computation (example 1.56, p. 68), there is a step-by-step procedure to transform (ab U a)* into an NFA. The text then ends with: "In this example, the procedure gives an NFA with eight states, but the smallest equivalent NFA has only two states. Can you find it?"

Nope, I can't. After a good deal of head scratching, I've convinced myself that it's not doable. But being a novice, I'm probably wrong. Can anyone help? Thanks!

### Planet Theory

#### TR14-110 | Separation between Estimation and Approximation | Uriel Feige, Shlomo Jozeph

We show (under standard assumptions) that there are NP optimization problems for which estimation is easier than approximation. Namely, one can estimate the value of the optimal solution within a ratio of $\rho$, but it is difficult to find a solution whose value is within $\rho$ of optimal.
As an important special case, we show that there are linear programming relaxations for which no polynomial time rounding technique matches the integrality gap of the linear program.

### /r/freebsd

#### FreeBSD Foundation August Update (PDF)

### /r/compsci

#### Why Everybody Should Learn Coding

### StackOverflow

#### Fastest way to check a list of integers against a list of ranges in Scala?

I have a list of integers and I need to find out which range each falls in. I have a list of ranges, whose size is between 2 and at most 15. Currently, for every integer, I scan through the list of ranges to find its location, but this takes a lot of time, as the list of integers I need to check contains a few thousand entries.

//list of integers
val numList : List[(Int,Int)] = List((1,4),(6,20),(8,15),(9,15),(23,27),(21,25))

//list of ranges
val rangesList:List[(Int,Int)] = List((1,5),(5,10),(15,30))

def checkRegions(numPos:(Int,Int), posList:List[(Int,Int)]) {
  val loop = new Breaks()
  loop.breakable {
    for (va <- 0 until posList.length) {
      if (numPos._1 >= posList(va)._1 && numPos._2 <= posList(va)._2) {
        //i save "va"
        loop.break()
      }
    }
  }
}

Currently, for every element in numList, I go through rangesList to find its range and save its location. Is there any faster/better approach to this?

Update: It's actually a list of tuples that is compared against a list of ranges.

#### Is it good to define classes as functions, and declare parameter types as function types?

I'm working on a Scala project, and a colleague who prefers a functional style proposes a way to organize code: define classes as functions. Here is a sample:

class FetchFeed extends (String => List[Feed]) {
  def apply(url:String):List[Feed] = ???
}

When another class needs this class, the dependency is declared using the type String => List[Feed]:

class MyWork(fetchFeed: String => List[Feed])

Then somewhere else, a FetchFeed is passed to it:

val fetchFeed = new FetchFeed
val myWork = new MyWork(fetchFeed)

The pro is that we can easily mock FetchFeed by passing a function:

val myWork = new MyWork(_ => List(new Feed))

The syntax is simple and easy to read. But the con is that when I see the declaration of MyWork:

class MyWork(fetchFeed: String => List[Feed])

it's hard to see which class will be passed in, and even the IDE can't help me. We need to search for extends (String => List[Feed]) in the codebase, or find the place where the MyWork is created. And if there is another class which extends String => List[Feed] but is never used in MyWork, it often confuses me.

But if we declare it with the concrete type:

class MyWork(fetchFeed: FetchFeed)

it is easy to jump to the declaration directly. In this case, however, we can't pass functions directly; instead, we need to:

val fetchFeed = mock[FetchFeed]
fetchFeed.apply(any[String]) returns List(new Feed)
val myWork = new MyWork(fetchFeed)

I'm struggling to choose between the two solutions. Is this a common pattern when writing code in a functional style? Are there any open-source projects in this style that I can get ideas from?

### QuantOverflow

#### Infinite autocorrelation - Unit root?

I have a time series of gold prices on which I want to build an ARIMA model. The series is autocorrelated, and no matter how often I difference it, it remains so.

First:

data: d1gold
Dickey-Fuller = -18.5829, Lag order = 19, p-value = 0.01
alternative hypothesis: stationary

Second:

data: d2gold
Dickey-Fuller = -32.6297, Lag order = 19, p-value = 0.01
alternative hypothesis: stationary

... and so on. What can I do to fit the data with an ARIMA model?
Data: https://drive.google.com/file/d/0B7cBu_0IHA17a1lQUlpsS1BJXzg/edit?usp=sharing

Best Regards,
Erik

### StackOverflow

#### Error using spray-aws to connect to DynamoDB under spray framework

I am writing a new server using scala + akka + spray and I need to connect to DynamoDB on AWS. I did some research and found the lib 'spray-aws'. But when I try to use it, I get some errors. Scala version is 2.11.2; sbt version should be 0.13.1.

$ sbt
> re-start

[info] Compiling 6 Scala sources to /home/ubuntu/dc-judi-server-scala/target/scala-2.11/classes...
[error] /home/ubuntu/dc-judi-server-scala/src/main/scala/com/example/Boot.scala:13: object dynamodb is not a member of package com.sclasen.spray.aws
[error] import com.sclasen.spray.aws.dynamodb
[error]        ^
[error]   val props = DynamoDBClientProps("xxx", "yyy", Timeout(100 seconds), dbsystem, dbsystem)
[error]               ^
[error]   val client = new DynamoDBClient(props)
[error]                    ^
[error] three errors found
[error] (compile:compile) Compilation failed
[error] Total time: 25 s, completed Aug 21, 2014 4:30:42 AM


Attached are my build.sbt and Boot.scala. I am pretty new to this framework and don't have much experience with it. Could you please help me and give me some insight? Many thanks.

organization  := "com.example"

version       := "0.1"

scalaVersion  := "2.11.2"

scalacOptions := Seq("-unchecked", "-deprecation", "-encoding", "utf8")

libraryDependencies ++= {
val akkaV = "2.3.5"
val sprayV = "1.3.1"
Seq(
"io.spray"            %%  "spray-can"     % sprayV,
"io.spray"            %%  "spray-routing" % sprayV,
"io.spray"            %%  "spray-testkit" % sprayV  % "test",
"com.typesafe.akka"   %%  "akka-actor"    % akkaV,
"com.typesafe.akka"   %%  "akka-slf4j"    % akkaV,
"com.typesafe.slick"  %%  "slick"         % "2.1.0",
"com.typesafe.akka"   %%  "akka-testkit"  % akkaV   % "test",
"org.specs2"          %%  "specs2-core"   % "2.3.11" % "test",
"mysql"               %   "mysql-connector-java" % "5.1.32",
"ch.qos.logback"      %   "logback-classic" % "1.1.1",
"com.sclasen"         %   "spray-aws_2.11"  % "0.3.4"
)
}

resolvers ++= Seq(
"Spray repository" at "http://repo.spray.io",
"Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/"
)

Revolver.settings


package com.example

import akka.actor.{ActorSystem, Props}
import akka.io.IO
import spray.can.Http
import akka.util.Timeout
import scala.concurrent.duration._
import com.example.config.Configuration
import com.example.service._

import com.sclasen.spray.aws.dynamodb
import concurrent.Await
import concurrent.duration._
import com.amazonaws.services.dynamodbv2.model.ListTablesRequest

object Boot extends App with Configuration {

// we need an ActorSystem to host our application in
implicit val system = ActorSystem("on-spray-can")

// A new actor system for host DB
import com.sclasen.spray.aws._

val dbsystem = ActorSystem("test")
val props = DynamoDBClientProps("xxx", "yyy", Timeout(100 seconds), dbsystem, dbsystem)
val client = new DynamoDBClient(props)
try {
val result = Await.result(client.sendListTables(new ListTablesRequest()), 100 seconds)
println(result)
result.getTableNames.size() should be >= 1
} catch {
case e: Exception =>
println(e)
e.printStackTrace()
}

// create and start our service actor
val service = system.actorOf(Props[CustomerServiceActor], "demo-service")

implicit val timeout = Timeout(5.seconds)
// start a new HTTP server on port 80 with our service actor as the handler
IO(Http) ? Http.Bind(service, host, port)
}


Update: I have tried changing the import to: import com.sclasen.spray.aws._

but DynamoDBClientProps and DynamoDBClient still cannot be found...

#### Set fill-column for all cider output

I would like to set the fill-column value for all cider output. I'm using:

(require 'cider)
(setq cider-show-error-buffer              nil
cider-docview-fill-column            76
cider-stacktrace-fill-column         76
nrepl-buffer-name-show-port          nil
cider-repl-display-in-current-window t
cider-repl-result-prefix             ";; => ")


But when I call meta, I get this:

user> (meta #'str)
;; => {:added "1.0", :ns #<Namespace clojure.core>, :name str, :file "clojure/core.clj", :static true, :column 1, :line 511, :tag java.lang.String, :arglists ([] [x] [x & ys]), :doc "With no args, returns the empty string. With one arg x, returns\n  x.toString().  (str nil) returns the empty string. With more than\n  one arg, returns the concatenation of the str values of the args."}


Everything is on one line. I bet there is some variable cider-...-fill-column that will help me. I googled it, but found only cider-docview-fill-column and cider-stacktrace-fill-column.

### QuantOverflow

#### Cholesky Decomposition on Correlation Matrix for Correlated Asset Paths

I found a matlab example for modelling correlated asset paths: http://www.goddardconsulting.ca/matlab-monte-carlo-assetpaths-corr.html

In this model the author uses the MATLAB function chol() to compute the Cholesky decomposition of the correlation matrix. However, by default, chol(corr) returns the upper triangular factor, whereas to my understanding the lower triangular factor is needed for generating correlated random numbers. The latter can be obtained with chol(corr,'lower'): http://www.mathworks.de/de/help/matlab/ref/chol.html

Now, is this simply a small error in the code example, or did I misunderstand some theoretical basics?

Best
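As a sanity check on the decomposition itself, here is a minimal dependency-free sketch (plain Scala; the object and method names are my own, and this is a hand-rolled 2×2 case, not the MATLAB code from the link) showing that the lower-triangular factor L of a correlation matrix satisfies L·Lᵀ = corr:

```scala
// Hand-rolled Cholesky factorization for a 2x2 correlation matrix
// [[1, rho], [rho, 1]]: the lower-triangular factor is
//   L = [[1, 0], [rho, sqrt(1 - rho^2)]]
// and correlated normals are obtained as L * z for iid standard normals z.
object CholeskyDemo {
  def lower2x2(rho: Double): Array[Array[Double]] =
    Array(Array(1.0, 0.0), Array(rho, math.sqrt(1.0 - rho * rho)))

  // L * L^T, to verify that the factor reproduces the correlation matrix
  def llt(l: Array[Array[Double]]): Array[Array[Double]] = {
    val n = l.length
    Array.tabulate(n, n)((i, j) => (0 until n).map(k => l(i)(k) * l(j)(k)).sum)
  }
}
```

Here llt(lower2x2(0.8)) recovers [[1.0, 0.8], [0.8, 1.0]] up to rounding. Note that MATLAB's chol(corr) returns the upper factor U with UᵀU = corr, so U.' is exactly this L; whether the linked example is actually wrong therefore depends on which side of the random vector the factor is applied.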

### CompsciOverflow

#### What is the definition of a $\Pi_1$-sentence?

What is meant when somebody says that a problem can be expressed as a $\Pi_1$-sentence? I know that for the arithmetical hierarchy, a $\Pi^0_1$-sentence is a sentence of the form $\forall n_1\forall n_2\dots\forall n_k \psi$ where $\psi$ is a formula in the language of Peano arithmetic with only bounded quantifiers. And for the analytical hierarchy, a $\Pi_1^1$-sentence is a sentence of the form $\forall X_1\forall X_2\dots\forall X_k \psi$ where $\psi$ is a formula in the language of second-order arithmetic with no set quantifiers.

I found the following definition for this notation in section "5 Proving Independence" of an article on the possibility of undecidability:

Let's define a $\Pi_1$-sentence to be any sentence of the form, "For all $x$, $P(x)$," where $P$ is a function that can be proven to be recursive in Peano Arithmetic, PA.

People talking about $\Pi_1$- and $\Pi_2$-sentences sometimes refer to Shoenfield's absoluteness theorem, which talks about $\Pi^1_2$-sentences, i.e. refers to the analytical hierarchy. Can I deduce from this that people talking about $\Pi_1$-sentences use $\Pi_1$ as a shorthand for $\Pi^1_1$ (because $x^1=x$...)? But the quoted definition looks much more like the condition from the arithmetical hierarchy to me, even though I'm not sure whether it is really equivalent to it.
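For concreteness, a standard illustration (my own example, not from the quoted article): Goldbach's conjecture is $\Pi_1$ in the arithmetical-hierarchy sense, since after the single unbounded universal quantifier everything is bounded:

$$\forall n\, \Big( \big(n > 2 \wedge 2 \mid n\big) \rightarrow \exists p \le n\, \exists q \le n\, \big(\mathrm{prime}(p) \wedge \mathrm{prime}(q) \wedge p + q = n\big) \Big)$$

where $\mathrm{prime}(p)$ is itself expressible with bounded quantifiers. Under the quoted definition it is likewise $\Pi_1$, since checking a single $n$ is a provably recursive computation.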

#### Show that the language of words with even sum of positions of a letter is regular

Let $\Sigma=\{a,b\}$, and let $S(a)$ be the sum of the positions of $a$ in the string $S$. I want to prove that $$L=\{S\in \Sigma^{*} \mid S(a)\equiv 0 \pmod 2\}$$ is regular.

What I was thinking is to somehow keep track of the sum of the positions of $a$ $(\bmod 2)$. For that, I was thinking of taking the set of states to be $\{0,1\}\times \{0,1\} \times \{0,1\}$, with starting state $(0,0,0)$. My aim is to keep track of the sum of the positions of $a$ in the first component. So, starting from the initial state, if the automaton reads $x$ consecutive $b$s it goes to $(0,\,0,\,x \bmod 2)$; then, after reading $y$ $a$s, it goes to $(xy+y(y+1)/2 \bmod 2,\; y \bmod 2,\; x \bmod 2)$; after reading $z$ $b$s, it goes to $(xy+y(y+1)/2 \bmod 2,\; y \bmod 2,\; (x+y)z+(x+y)(x+y)/2 \bmod 2)$; and so on. The accepting states are $\{0\}\times \{0,1\}\times \{0,1\}$. I believe this works, but I don't understand how to define the transition on each state.
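The bookkeeping above can be checked concretely with a small simulation (my own sketch, not a formal proof). Only two bits of state are needed: the running sum of $a$-positions mod 2, and the parity of the number of letters read so far; since the state space is finite, the same transition table read off this fold is a 4-state DFA for $L$:

```scala
// DFA-style fold over the word: state = (sum of 1-based positions of 'a' mod 2,
// number of letters read so far mod 2). The letter just read sits at position
// lettersRead + 1, whose parity is pos ^ 1; reading 'a' there flips the sum
// parity exactly when that position is odd.
object EvenPositionsOfA {
  def accepts(w: String): Boolean = {
    val (sumParity, _) = w.foldLeft((0, 0)) { case ((sum, pos), ch) =>
      val here = pos ^ 1                       // parity of the current 1-based position
      val newSum = if (ch == 'a') sum ^ here else sum
      (newSum, pos ^ 1)                        // position parity flips on every letter
    }
    sumParity == 0
  }
}
```

For example, accepts("aba") is true (positions 1 and 3 sum to 4) while accepts("a") is false. Tracking the full triple from the question is not necessary; these two components already determine the next state.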

#### Probabilistic hardness of approximation or solution of NP-hard optimization problems under a probabilistic generative model for input data

So in biology (DNA sequences), sequence alignment is a generalization of longest common subsequence, where an alignment of two sequences is typically scored with a linear function of how many spaces are inserted into each sequence and how many times each possible pair of aligned characters appears in the alignment. Just like longest common subsequence, finding the optimal alignment of two strings under an arbitrary linear scoring scheme can be solved in quadratic time using dynamic programming (the Needleman-Wunsch algorithm). The longest common subsequence problem and variants that use linear scoring schemes and ask for the optimal multiple sequence alignment are NP-hard when the number of input strings is not fixed.

However, in biology, there is a probabilistic generative model that generates related DNA sequences. Starting with an unknown root ancestor DNA sequence, bifurcations occur that create two daughter sequences (species) that are independently derived from the ancestral sequence by potentially adding some characters in random locations, deleting some characters, and changing some characters. Then the bifurcations continue with additional changes at each level until the modern day DNA sequences of extant species are obtained. Then we want to align the modern day species' sequences (e.g. find the longest common subsequence in the simplest case) without knowing the exact ancestral sequences. In this case, fossil records can help identify the bifurcation events and estimate the sequence mutation rates after each bifurcation. So a reasonable estimate of the generative model that generated the related modern day DNA sequences can sometimes be obtained.

Now, my question is, for such an NP-hard optimization problem with a well-defined probabilistic generative model that generates input data, has anyone studied the hardness of finding either an optimal or nearly optimal solution, where either the worst-case or expected running time depends on the parameters for the model that generates the input data? For example, if DNA mutation and insertion/deletion rates are very low for a particular group of species, then it should be fairly easy to get at least a nearly optimal alignment of all the DNA sequences using partial alignments and pruning and heuristics, without resorting to a full-blown exponential time solution.

#### Converting generalized NFAs to NFAs

I came across generalized nondeterministic finite automata (GNFAs) in Sipser's Introduction to the Theory of Computation. These are automata where transitions are labelled with regular expressions, rather than single symbols from the alphabet. I thought he would explain why GNFAs are allowed. I mean, an appropriate explanation would be that GNFAs are equivalent to NFAs, or GNFAs are equivalent to DFAs or some such argument. But I couldn't find any such explanation in the book.

Online, I read in this article that you can convert a GNFA to an NFA as follows:

For each transition arrow in the GNFA, we insert the complete automaton accepting the language generated by the transition arrow’s label as a “subautomaton;” this way, we can replace each regular expression by a set of states and character transitions

How is the automaton inserted?

Let's say we have a GNFA with an arrow going from state A to state B labelled with a regular expression R. To convert this GNFA to an NFA, do we get rid of that arrow and instead take an NFA N that recognizes L(R), create an arrow from A to the start state of N labelled with the epsilon symbol, and then create arrows from the accept states of N to B, each also labelled with the epsilon symbol?

Of course the accept states of N would no longer be accept states in the new machine, would they?

I know that GNFAs are equivalent to NFAs but I need a convincing proof, not just a short paragraph mentioning their equivalence.
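The construction guessed at in the question can be exercised concretely. The following toy sketch (my own encoding, not from Sipser) takes the one-edge GNFA A --(ab)*--> B, replaces the edge by ε-transitions into and out of a hand-built NFA for (ab)*, demotes the sub-NFA's accept state, and simulates the result with the usual subset/ε-closure method:

```scala
// Toy NFA encoding: transitions keyed by (state, Some(char)), or (state, None)
// for an epsilon move.
object GnfaSplice {
  type Delta = Map[(Int, Option[Char]), Set[Int]]

  // Outer GNFA states: A = 0, B = 1 (only B accepts).
  // Sub-NFA N for (ab)*: start/accept state 10, middle state 11.
  // Splice: epsilon A -> start(N), epsilon accept(N) -> B; N's accept
  // state is NOT accepting in the combined machine.
  val delta: Delta = Map(
    (0, None)       -> Set(10),
    (10, None)      -> Set(1),
    (10, Some('a')) -> Set(11),
    (11, Some('b')) -> Set(10)
  )
  val accepting = Set(1)

  def closure(states: Set[Int]): Set[Int] = {
    var cur = states
    var grown = true
    while (grown) {
      val next = cur ++ cur.flatMap(s => delta.getOrElse((s, None), Set.empty[Int]))
      grown = next != cur
      cur = next
    }
    cur
  }

  def accepts(w: String): Boolean = {
    val end = w.foldLeft(closure(Set(0))) { (st, c) =>
      closure(st.flatMap(q => delta.getOrElse((q, Some(c)), Set.empty[Int])))
    }
    (end intersect accepting).nonEmpty
  }
}
```

The combined machine accepts exactly (ab)* between A and B, which is the point: the ε-in, ε-out wiring with the sub-automaton's accept states demoted preserves the language, and iterating it over every regex-labelled edge yields an NFA equivalent to the GNFA.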

#### Algorithms that are similar to Dynamic TIme Warping

Dynamic time warping (DTW) is an algorithm in time series analysis for measuring similarity between two temporal sequences which may vary in time or speed. Here are some explanations of DTW:

1. Dynamic Time Warping by Wikipedia
2. Dynamic Time Warping by M Müller (2007)

Is there any algorithm that can replace DTW for measuring similarity between two temporal sequences which may vary in time or speed?
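For reference when weighing alternatives, the DTW recurrence itself is short; a minimal sketch (absolute-difference local cost, no warping-window constraint, object name mine):

```scala
// Classic dynamic time warping:
//   dp(i)(j) = |a(i-1) - b(j-1)| + min(dp(i-1)(j-1), dp(i-1)(j), dp(i)(j-1))
// with dp(0)(0) = 0; dp(|a|)(|b|) is the warped distance.
object Dtw {
  def distance(a: IndexedSeq[Double], b: IndexedSeq[Double]): Double = {
    val dp = Array.fill(a.length + 1, b.length + 1)(Double.PositiveInfinity)
    dp(0)(0) = 0.0
    for (i <- 1 to a.length; j <- 1 to b.length) {
      val cost = math.abs(a(i - 1) - b(j - 1))
      dp(i)(j) = cost + math.min(dp(i - 1)(j - 1), math.min(dp(i - 1)(j), dp(i)(j - 1)))
    }
    dp(a.length)(b.length)
  }
}
```

Dtw.distance(Vector(1.0, 2.0, 3.0), Vector(1.0, 2.0, 2.0, 3.0)) is 0, because the repeated 2 aligns to the single 2: that is exactly the time/speed invariance the question describes. Alternatives discussed in the literature include edit distance with real penalty (ERP) and longest common subsequence (LCSS) on series, which trade off that invariance and noise sensitivity differently.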

### QuantOverflow

#### Searching for name business & Permit near Arlington Heights?

I'm moving to the Illinois region, towards the Heights, and I am searching for one of those name businesses & personal permit offices to register my vehicle at. Does anybody know of one near Arlington Heights? It's difficult to get to the DMV due to my hours.

### StackOverflow

#### Clojure: difference between how special forms, functions and macros are implemented

I have just started with Clojure. I am reading this. I did not understand the difference between how special forms are implemented and how functions and macros are implemented, where it says:

Nearly all functions and macros are implemented in Clojure source code. The differences between functions and macros are explained later. Special forms are recognized by the Clojure compiler and not implemented in Clojure source code.

Can someone explain the difference between the two (implemented in Clojure source code vs. not implemented in Clojure source code)?

#### Slick: query multiple tables/databases with getting column names

I have methods in my Play app that query database tables with over a hundred columns. I can't define a case class for each such query, because it would be ridiculously big and would have to be changed with every alteration of the table in the database.

I'm using this approach, where result of the query looks like this:

Map(columnName1 -> columnVal1, columnName2 -> columnVal2, ...)


Example of the code:

implicit val getListStringResult = GetResult[List[Any]] (
r => (1 to r.numColumns).map(_ => r.nextObject).toList
)

def getSomething(): Map[String, Any] = DB.withSession {
val columns = MTable.getTables(None, None, None, None).list.filter(_.name.name == "myTable").head.getColumns.list.map(_.column)
val result = sql"""SELECT * FROM myTable LIMIT 1""".as[List[Any]].firstOption.map(columns zip _ toMap).get
}


This is not a problem when query only runs on a single database and single table. I need to be able to use multiple tables and databases in my query like this:

def getSomething(): Map[String, Any] = DB.withSession {

//The line below is no longer valid because of multiple tables/databases
val columns = MTable.getTables(None, None, None, None).list.filter(_.name.name == "table1").head.getColumns.list.map(_.column)
val result = sql"""
SELECT      *
FROM        db1.table1
LEFT JOIN   db2.table2 ON db2.table2.col1 = db1.table1.col1
LIMIT       1
""".as[List[Any]].firstOption.map(columns zip _ toMap).get

}


The same approach can no longer be used to retrieve column names. This problem doesn't exist when using something like PHP PDO or Java JDBCTemplate - these retrieve column names without any extra effort needed.

My question is: how do I achieve this with Slick?

### Planet FreeBSD

#### Happy 20th birthday FreeBSD ports tree!

It all started with this commit from Jordan Hubbard on August 21, 1994:

Commit my new ports make macros
Still not 100% complete yet by any means but fairly usable at this stage.

Twenty years later the ports tree is still there and actively
maintained. A video was prepared to celebrate the event and to thank
all of you who give some of your spare time and energy to the project!

### StackOverflow

#### Writing Body Parser with Security Trait for multipartFormData, Play Framework

I'm trying to upload an image at the same time as I submit the form; after some research, I tried using multipartFormData to accomplish this. This is my form submission function header after following the tutorials:

def insert = withInsert(parse.multipartFormData) { username => implicit request =>


I used security traits to check for the user (withUser), login time (withAuth) and permission (withInsert):

def withUser(f: => String => Request[AnyContent] => Result) = {
Action(request => f(user)(request))
}
}

def withAuth(f: => String => Request[AnyContent] => Result) = withUser { user => request =>
var timestamp = request.session.get("timestamp")
timestamp.map { timestamp =>
if(System.currentTimeMillis - timestamp.toLong < (3600*1000))
f(user)(request)
else
onUnauthorized(request)
}
.getOrElse{
onUnauthorized(request)
}
}

def withInsert[A](p: BodyParser[A])(f: String => Request[A] => Result) = withAuth { username => request =>

if(permission.page_insert == 1)
Action(p)(request => f(username)(request))
else
onPermissionDenied(request)
}



As the insert function needed a body parser, I modified the withInsert trait to support a body parser. But then I got a compile error on this line:

Action(p)(request => f(username)(request))

type mismatch; found : play.api.mvc.Action[A] required: play.api.mvc.Result


I'm quite lost as to what is wrong here; any help is greatly appreciated.

## Edit:

I've tried to do exactly what the tutorial did, abandoning the usage of withAuth on the security trait

def withInsert[A](p: BodyParser[A])(f: String => Request[A] => Result) = {
val permission = User.checkAuth("Page", user)

if(permission.page_insert == 1)
Action(p)(request => f(user)(request))
else
onPermissionDenied(request)
}
}


This code results in another compile error on the same line, but with a different error:

not found: value request


After removing the permission checking, it returns no compile error.

def withInsert[A](p: BodyParser[A])(f: String => Request[A] => Result) = {
Action(p)(request => f(user)(request))
}
}


But I need the program to check for permission before running functions, and not just the username (whether the current user has logged in or not). Is there a way to do this? I need a workaround so that I can apply both the permission checking and the body parser to the trait.

### StackOverflow

#### Anorm string set from postgres ltree column

I have a table with one of the columns having ltree type, and the following code fetching data from it:

SQL("""select * from "queue"""")()
  .map { row =>
    val queue =
      Queue(
        row[String]("path"),
        row[String]("email_recipients"),
        new DateTime(row[java.util.Date]("created_at")),
        row[Boolean]("template_required")
      )
    queue
  }
  .toList


which results in the following error:

RuntimeException: TypeDoesNotMatch(Cannot convert notification.en.incident_happened:class org.postgresql.util.PGobject to String for column ColumnName(queue.path,Some(path)))

queue table schema is the following:

CREATE TABLE queue
(
  id serial NOT NULL,
  template_id integer,
  template_version integer,
  path ltree NOT NULL,
  json_params text,
  email_recipients character varying(1024) NOT NULL,
  email_from character varying(128),
  email_subject character varying(512),
  created_at timestamp with time zone NOT NULL,
  sent_at timestamp with time zone,
  failed_recipients character varying(1024),
  template_required boolean NOT NULL DEFAULT true,
  attachments hstore,
  CONSTRAINT pk_queue PRIMARY KEY (id),
  CONSTRAINT fk_queue__email_template FOREIGN KEY (template_id)
    REFERENCES email_template (id) MATCH SIMPLE
    ON UPDATE CASCADE ON DELETE RESTRICT
)
WITH (
  OIDS=FALSE
);
ALTER TABLE queue OWNER TO postgres;
GRANT ALL ON TABLE queue TO postgres;
GRANT SELECT, UPDATE, INSERT, DELETE ON TABLE queue TO writer;
GRANT SELECT ON TABLE queue TO reader;


Why is that? Isn't notification.en.incident_happened just an ordinary string? Or am I missing something?

UPD:

The question still applies, but here is a workaround:

SQL("""select id, path::varchar, email_recipients, created_at, template_required from "queue"""")()


### QuantOverflow

#### How is the price of a bond actually determined?

How is the price of a bond actually determined? Is it supply and demand that sets the price first, with the YTM then calculated from that price? Or do changes to the interest-rate curve come first, with the bond then priced by the usual discounting method, that price becoming the market price?
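For reference (not part of the original question), the two views are linked by the same identity: given a yield, the price is the discounted sum of the cash flows; given a traded price, the YTM is the yield that solves that same equation. A small illustrative sketch (bondPrice and ytmFromPrice are made-up helper names):

```scala
// Price of a bullet bond: discounted coupons plus discounted face value.
def bondPrice(face: Double, couponRate: Double, ytm: Double, years: Int): Double = {
  val coupon = face * couponRate
  val coupons = (1 to years).map(t => coupon / math.pow(1 + ytm, t)).sum
  coupons + face / math.pow(1 + ytm, years)
}

// A 5-year 5% bond discounted at a 5% yield trades at par.
println(bondPrice(100, 0.05, 0.05, 5)) // ≈ 100 (par)

// Going the other way: solve for the YTM implied by an observed market
// price, by bisection (price is decreasing in yield).
def ytmFromPrice(face: Double, couponRate: Double, price: Double, years: Int): Double = {
  var lo = 0.0
  var hi = 1.0
  while (hi - lo > 1e-10) {
    val mid = (lo + hi) / 2
    if (bondPrice(face, couponRate, mid, years) > price) lo = mid else hi = mid
  }
  (lo + hi) / 2
}

// A below-par price implies a yield above the coupon rate.
println(ytmFromPrice(100, 0.05, 95.0, 5))
```

Either direction is just a restatement of the other; the market quotes one number and the other follows mechanically.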

### StackOverflow

#### Jackson serialization - Type id handling not implemented for type T

I'm writing a simple file system model for a simulation. I am attempting to serialize the contents of my virtual hard drive at various times using Jackson's JSON serialization. It all seemed to work fine until I used a custom serializer for the files (to avoid some deep copies). Now I'm getting 'Type id handling not implemented for type DataFile' errors when I attempt to serialize.

To further complicate matters, I'm using scala, but I can use Java collections for the serialization if need be.

Can someone explain Type id handling? Do I need to put it on the interface (IDataFile in this case) or the concrete classes?

I have read this: http://programmerbruce.blogspot.com/2011/05/deserialize-json-with-jackson-into.html, though it appears that at least some of this is old. I'm using Jackson 2.4.2

Thanks!

### StackOverflow

#### Suppress the printing of the data an atom holds in the REPL? (or ref, agent, ...)

The following is perfectly valid Clojure code:

(def a (atom nil))
(def b (atom a))
(reset! a b)


it is even useful in situations where back references are needed.

However, it is annoying to work with such things in the REPL: the REPL will try to print the content of such references whenever you type a or b, and will, of course, generate a stack overflow error pretty quickly.

So is there any way to control/change the printing behaviour of atoms/refs/agents in Clojure? Some kind of cycle detection would be nice, but even the complete suppression of the deref'ed content would be really useful.

#### Building a tree from Stream using Scala

I want to build a tree, read from a file, of arbitrary height, in exactly this format:

    1
   2 3
  4 5 6
 . . . .
. . . . .


using the following structure

case class Tree[+T](value: T, left: Option[Tree[T]], right: Option[Tree[T]])


The challenge I am facing is that I have to read up to the last line before I can build the tree, because left and right are immutable. The approach I tried was:

1. Read the first line and create a node with value (1), with left and right set to None.
2. Read the second line and create nodes with values (2) and (3), left and right set to None. This time, a new node (1) is created with left = node(2) and right = node(3).
3. Read the third line and create nodes with values (4), (5) and (6), with left and right set to None. Create new node(2) and node(3) with node(2) -> node(4), node(5) and node(3) -> node(5), node(6), and finally node(1) -> the new node(2) and node(3).
4. Repeat until the end of the file.

At the end of it, I should have this relationship:

        1
       / \
      2   3
     / \ / \
    4  5 5  6
   /\ /\ /\ /\
  .  . .  . .  .


Any good advice for me? Thanks
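Not from the thread: one way to implement the bottom-up construction the steps above describe, assuming the file has already been read into a list of levels (the `build` helper below is an invented name). Each fold step zips a row of values with the row of subtrees built beneath it; note that, as in the diagram, inner values such as (5) end up shared between two parents.

```scala
case class Tree[+T](value: T, left: Option[Tree[T]], right: Option[Tree[T]])

// Build bottom-up: start from the last row as leaves, then fold upwards.
// Row k has k+1 values; node i in a row gets children i and i+1 below.
def build[T](levels: List[List[T]]): Option[Tree[T]] =
  levels.reverse match {
    case Nil => None
    case last :: rest =>
      val leaves = last.map(v => Tree(v, None, None))
      val top = rest.foldLeft(leaves) { (below, row) =>
        row.zipWithIndex.map { case (v, i) =>
          Tree(v, Some(below(i)), Some(below(i + 1)))
        }
      }
      top.headOption
  }

// Usage with the example triangle 1 / 2 3 / 4 5 6:
val t = build(List(List(1), List(2, 3), List(4, 5, 6)))
```

Because the fold only ever needs the previously built row, the same shape also works over a lazily read Stream of lines, as long as the final fold is forced once the last line is reached.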

#### Issue with using wildcard parameter twice in a case class

As per the example below, I am trying to make a case class that can hold items of type SomeResult[T] without having to know what T is. This works fine in the case of Rawr, which can hold a Set of SomeResult[_]. However, when I add a second field intended to work on the same principle (i.e. a single element whose content we don't care about, plus a set of such elements), I get the following error:

[error] /Users/matthewdedetrich/temp/src/main/scala/Main.scala:15: type arguments [_$2] do not conform to class SomeResult's type parameter bounds [A <: T]
[error] case class Bleh(oneThing:SomeResult[_],moreThings:Set[SomeResult[_]]) // This doesn't

Here is the sample code:

trait T { }
case class First(int: Int) extends T
case class Second(int: Int) extends T

case class SomeResult[A <: T](name: String, t: A)

case class Rawr(multipleThings: Set[SomeResult[_]]) // This works
case class Bleh(oneThing: SomeResult[_], moreThings: Set[SomeResult[_]]) // This doesn't

There is a suggestion to use [+A <: T] as a type bound instead of a wildcard; however, the following code doesn't work when doing this:

val t = Set(First(3), Second(5))

def someFunc[A <: T](thing: A) = {
  thing match {
    case First(_) => SomeResult("a", First(10))
    case Second(_) => SomeResult("b", Second(15))
    case _ => throw new IllegalArgumentException("rawr")
  }
}

val z = t.map {
  case x: First => someFunc(First(5))
  case y: Second => someFunc(Second(5))
  case _ => throw new IllegalArgumentException("rawr")
}

val z2 = Rawr(z)

Which then produces the error:

[error] found : scala.collection.immutable.Set[SomeResult[Product with Serializable with T]]
[error] required: Set[SomeResult[T]]
[error] Note: SomeResult[Product with Serializable with T] <: SomeResult[T], but trait Set is invariant in type A.
[error] You may wish to investigate a wildcard type such as _ <: SomeResult[T]. (SLS 3.2.10)

Which is why I used wildcard types in the first place.
Funnily enough, if you try to provide a return type to someFunc, you get the exact same problem (where the Scala compiler error suggests that you should use wildcard types).

EDIT 2: I have actually managed to get the code to compile by doing this:

def someFunc[A <: T](thing: A): SomeResult[A] = {
  thing match {
    case First(_) => SomeResult("a", First(10)).asInstanceOf[SomeResult[A]]
    case Second(_) => SomeResult("b", Second(15)).asInstanceOf[SomeResult[A]]
    case _ => throw new IllegalArgumentException("rawr")
  }
}

def z[A <: T]: Set[SomeResult[A]] = t.map {
  case x: First => someFunc(First(5)).asInstanceOf[SomeResult[A]]
  case y: Second => someFunc(Second(5)).asInstanceOf[SomeResult[A]]
  case _ => throw new IllegalArgumentException("rawr")
}

I'm not sure if it's "idiomatic" or "right", but it's the only way to get the Product with Serializable out of the type signature. I have no idea why Scala infers this when the result type is clearly a subtype of T.

#### How to compile a spark-cassandra program using scala?

Lately I started learning Spark and Cassandra. I know that we can use Spark in Python, Scala and Java, and I've read the docs on this website: https://github.com/datastax/spark-cassandra-connector/blob/master/doc/0_quick_start.md. The thing is, after I create a program named testfile.scala with the code the document gives (I don't know if I am right in using .scala), I don't know how to compile it. Can anyone guide me on what to do with it?
Here is testfile.scala:

import com.datastax.spark.connector._
import com.datastax.spark.connector.streaming._

val conf = new SparkConf(true).set("spark.cassandra.connection.host", "127.0.0.1")
val sc = new SparkContext("spark://127.0.0.1:7077", "test", conf)
val ssc = new StreamingContext(conf, Seconds(n))
val stream = ssc.actorStream[String](Props[SimpleStreamingActor], actorName, StorageLevel.MEMORY_AND_DISK)
val wc = stream.flatMap(_.split("\\s+"))
  .map(x => (x, 1))
  .reduceByKey(_ + _)
  .saveToCassandra("streaming_test", "words", SomeColumns("word", "count"))

val rdd = sc.cassandraTable("test", "kv")
println(rdd.count)
println(rdd.first)
println(rdd.map(_.getInt("value")).sum)

#### Generic Spray-Client

I'm trying to create a generic HTTP client in Scala using Spray. The following is my HttpClient class trying its best to be generic:

package services

import akka.actor.ActorSystem
import akka.event.Logging
import spray.client.pipelining._
import spray.http.{BasicHttpCredentials, HttpRequest}
import spray.httpx.marshalling.Marshaller
import spray.httpx.unmarshalling._
import scala.concurrent.Future
import utils.AllJsonFormats._
import models.api._

object HttpClient extends HttpClient

class HttpClient {
  implicit val system = ActorSystem("api-spray-client")
  import system.dispatcher
  val log = Logging(system, getClass)

  def httpSaveGeneric[T1: Marshaller, T2: Unmarshaller](uri: String, model: T1, username: String, password: String): Future[T2] = {
    val pipeline: HttpRequest => Future[T2] = logRequest(log) ~> sendReceive ~> logResponse(log) ~> unmarshal[T2]
    pipeline(Post(uri, model))
  }

  val genericResult = httpSaveGeneric[Space, Either[Failure, Success]](
    "http://", Space("123", IdName("456", "parent"), "my name", "short_name", Updated("", 0)),
    "user", "password")
}

utils.AllJsonFormats has the following declaration. It contains all the model formats. The same class is used on the "other end", i.e.
I also wrote the API and used the same formatters there with spray-can and spray-json.

object AllJsonFormats
  extends DefaultJsonProtocol
  with SprayJsonSupport
  with MetaMarshallers
  with MetaToResponseMarshallers
  with NullOptions {

Of course that object has definitions for the serialization of models.api.Space, models.api.Failure and models.api.Success.

The Space type seems fine, i.e. when I tell the generic method that it will be receiving and returning a Space, there are no errors. But once I bring an Either into the method call, I get the following compiler error:

could not find implicit value for evidence parameter of type spray.httpx.unmarshalling.Unmarshaller[Either[models.api.Failure,models.api.Success]]

My expectation was that the either implicit in spray.json.DefaultJsonProtocol, i.e. in spray.json.StandardFormats, would have me covered.

#### How is val in scala different from var in java?

Anyone care to elaborate on how val in Scala is different from const in Java? What are the technical differences? I believe I understand what "const" is in C++ and Java. I get the feeling that "val" is somehow different and better in some sense, but I just can't put my finger on it. Thanks!

### /r/compsci

#### Logic Puzzle: Count the Flags (with solution)

### StackOverflow

#### Waiting for three seconds between HTTP requests to specific URL

Using Spray, I want a system that waits some seconds between sending two HTTP requests to a specific URL, because I don't want my app's automatic connections to mess up the server's traffic. How do you do it? I can do it by putting the command in every place where a pause is needed, but I figure that doesn't look clean and is hard to maintain afterwards. I would love it if this could be abstracted to the level of the ActorSystem. Thank you!
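Not part of the original question: Spray and Akka aside, the gap-enforcing behaviour itself can be isolated in one small wrapper so call sites never mention the delay. A minimal pure-Scala sketch (blocking; Throttled is an invented name, and an Akka version would use the scheduler instead of Thread.sleep):

```scala
// Wrap an arbitrary request function so that successive calls are spaced
// at least `gapMillis` apart. Blocking, illustrative sketch only.
class Throttled[A, B](gapMillis: Long)(send: A => B) {
  private var lastCall = 0L

  def apply(a: A): B = synchronized {
    val now = System.currentTimeMillis
    val wait = lastCall + gapMillis - now
    if (wait > 0) Thread.sleep(wait) // enforce the gap before sending
    lastCall = System.currentTimeMillis
    send(a)
  }
}

// Usage: every call to `get` is spaced at least 3000 ms apart,
// regardless of where in the code it is invoked.
val get = new Throttled[String, String](3000)(url => "GET " + url)
```

Centralising the pause in the wrapper is the same idea as pushing it into the ActorSystem: the rest of the code just calls `get(url)` and never knows a delay exists.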
### Portland Pattern Repository

#### In Soviet Russia Jokes Now Required (by 173-23-242-215.client.mchsi.com 16 hours ago)

#### The User (by d58-111-167-142.meb802.vic.optusnet.com.au 16 hours ago)

### Lobsters

#### Ftrace: The hidden light switch

### StackOverflow

#### How to find unused sbt dependencies?

My build.sbt has a lot of dependencies now. How do I know which dependencies are actually being used? Maven seems to have dependency:analyze (http://maven.apache.org/plugins/maven-dependency-plugin/). Is there something similar for sbt?

### /r/emacs

#### Emacs Theme Gallery

### Wes Felter

#### EnterpriseTech: IBM Techies Pit Docker, KVM Against Linux Bare Metal

### StackOverflow

#### C++ functional & generic programming [with MySQL connector example]

I am going to use the MySQL connector. It provides functions to access the fields of a result row; some examples are getString(1), getInt(1), getDate(2). The number inside the parentheses is the index into the result, so I have to use the following code to access this example row: 'John', 'M', 34

string name = row.getString(1);
string sex = row.getString(2);
int age = row.getInt(3);

I would like to try generic programming for various reasons (mainly for fun), but it was quite disappointing that I couldn't make it happen even with much time spent. The final result that I want:

std::tie(name, sex, age) = row.getResult<string, string, int>();

This function should call the corresponding MySQL API. It would also be good to see any answer similar to the one below, although the syntax is wrong:

std::tie(name, sex, age) = row.getResult([string, string, int]);

Please don't suggest using a for-loop. Let's try something more generic & functional ;-)

### /r/netsec

#### ASM: A Programmable Interface for Extending Android Security [PDF]

### Wes Felter

#### Mike Day: Cloud Operating Systems for Servers

#### Darren Shepherd: Containers as a Service (CaaS) Is the Cloud Operating System

#### "What was exciting about the XMPP protocol itself?
Were people back then just excited to be in the..."

"What was exciting about the XMPP protocol itself? Were people back then just excited to be in the presence of vast amounts of XML? I mean, that'd explain a lot." - astrange looks back at Google Wave

### StackOverflow

#### Why can a self-type refer to a class?

I know Scala can only mix in traits, which makes sense for dependency injection and the cake pattern. My question is why I can still declare a class whose self-type is another "class" rather than a trait. Code:

class C
class D { self: C => }

This still compiles successfully. I thought it should fail to compile, because at this point how can one instantiate D (C is a class, not a trait)?

Edit: when trying to instantiate D:

new D with C // compilation fails: class C needs to be a trait to be mixed in

### Planet FreeBSD

#### FreeBSD Foundation August Update Now Available

The FreeBSD Foundation August Update is now available. Get the latest Foundation news at: https://www.freebsdfoundation.org/press/2014augupdate.pdf

#### EuroBSDCon 2014 Travel Grant Deadline Extended

The deadline for submitting your application for a Travel Grant to EuroBSDCon 2014 has been extended. Please submit your application by Friday, August 22, 2014. Find out more at: https://www.freebsdfoundation.org/announcements#eurobsdcon2014

### StackOverflow

#### What's the type of => String in scala?

In Scala, there are call-by-name parameters:

def hello(who: => String) = println("hello, " + who)

What's the type of the parameter who? The Scala REPL shows the function as:

hello: (who: => String)Unit

Is the type still => String? Is there a name for it? Is there any documentation describing the type?

## Further questions raised by answer

### Question 1 (when reading the spec of §3.3.1, Method Types)

A method type is the type of a method. Say I defined a method hello:

def hello: String = "abc"

The type of it can be written as => String, right?
Although you can see the REPL response is:

scala> def hello: String = "abc"
hello: String

If I define a method which has parameters:

def goodname(name: String): String = name + "!"

What's the type of this method? It should be similar to String => String, but it is not, because it is a method type, and String => String is a function type.

### Question 2 (when reading the spec of §3.3.1, Method Types)

I can understand this:

def goodname(name: String): String = name + "!"
def print(f: String => String) = println(f("abc"))
print(goodname)

When I call print(goodname), the type of goodname is converted to the function type String => String, right? But for a parameterless method:

def hello: String = "abc"

What function type can it be converted to? I tried:

def print(f: () => String) = println(f())

But this can't be compiled:

print(hello)

The error is:

error: type mismatch;
 found   : String
 required: () => String

Could you give me an example that works?

### Question 3 (when reading the spec of §6.26.2, Method Conversions)

This Evaluation conversion only happens when the type is not applied to arguments. So, for this code:

def myname: String = "abc"
def print(name: => String) = println(name)
print(myname)

my question is: when I call print(myname), does a conversion (I mean the Evaluation conversion) happen? I guess, since the type of myname is just => String, it can be passed to print directly. If the print method is changed:

def myname: String = "abc"
def print(name: String) = println(name)
print(myname)

then the Evaluation conversion definitely happens, right? (From => String to String.)

### QuantOverflow

#### Volatility of Futures

Apparently:

Under a constant interest rate, the futures price is given by a deterministic time function times the asset price (I think I understand this). This means that the volatility of the futures price should be the same as that of the underlying asset price.

I'm not really sure how the second claim is true.
Is there any more intuitive explanation as to why this would hold?

### StackOverflow

#### Does Akka have a dedicated selector for OP_ACCEPT?

Many NIO-based frameworks use a dedicated selector for OP_ACCEPT and other selectors for OP_WRITE and OP_READ. Does Akka do the same?

#### JDT weaving is currently disabled

I just installed Eclipse standard 4.4 Luna, and after installing the Scala IDE and friends I get:

JDT Weaving is currently disabled. The Scala IDE needs JDT Weaving to be active, or it will not work correctly.
Activate JDT Weaving and Restart Eclipse? (Highly Recommended)
[OK] [Cancel]

Does anyone know how to do this? Now my comments on this error message:

• In general, error messages that tell you what to do, but not how to do it, are frustrating.
• The [OK] button implies that the dialog will enable it for me, but it does exactly the same as clicking the [Cancel] button. Consequently, the UI design is defective.
• The preferences dialog in Luna does not show anything under JDT or Weaving.
• The help search in Luna for "JDT Weaving" returns too much information to offer any simple solution.
• My search via Google turns up interesting discussion of the problem, but fails to simply state the solution, or whether there is one: https://groups.google.com/forum/#!msg/scala-ide-user/7GdTuQHyP4Q/aiUt70lnzigJ

### arXiv Cryptography and Security

#### Simple explanation on why QKD keys have not been proved secure. (arXiv:1408.4780v1 [quant-ph])

A simple counter-example is given to the prevalent interpretation of the trace distance criterion as failure probability in quantum key distribution protocols. A summary of its ramifications is listed.

#### A Proposed System for Covert Communication to Distant and Broad Geographical Areas. (arXiv:1408.4751v1 [cs.CR])

A covert communication system is developed that modulates Morse code characteristics and that delivers its message economically and to geographically remote areas using radio and EchoLink.
Our system allows a covert message to be sent to a receiving individual by hiding it in an existing carrier Morse code message. The carrier need not be sent directly to the receiving person, though the receiver must have access to the signal. Illustratively, we propose that our system may be used as an alternative means of implementing numbers stations.

#### A Covert Channel Using Named Resources. (arXiv:1408.4749v1 [cs.CR])

A network covert channel is created that uses resource names such as addresses to convey information, and that approximates typical user behavior in order to blend in with its environment. The channel correlates available resource names with a user-defined code-space, and transmits its covert message by selectively accessing resources associated with the message codes. In this paper we focus on an implementation of the channel using the Hypertext Transfer Protocol (HTTP) with Uniform Resource Locators (URLs) as the message names, though the system can be used in conjunction with a variety of protocols. The covert channel does not modify expected protocol structure as might be detected by simple inspection, and our HTTP implementation emulates transaction-level web user behavior in order to avoid detection by statistical or behavioral analysis.

#### Directed Width Measures and Monotonicity of Directed Graph Searching. (arXiv:1408.4745v1 [cs.DM])

We consider generalisations of tree width to directed graphs, which attracted much attention in the last fifteen years. About their relative strength with respect to "bounded width in one measure implies bounded width in the other", many problems remain unsolved. Only some results separating directed width measures are known. We give an almost complete picture of this relation. For this, we consider the cops and robber games characterising DAG-width and directed tree width (up to a constant factor).
For DAG-width games, it is an open question whether the robber-monotonicity cost (the difference between the minimal numbers of cops capturing the robber in the general and in the monotone case) can be bounded by any function. Examples show that this function (if it exists) is at least $f(k) > 4k/3$ (Kreutzer, Ordyniak 2008). We approach a solution by defining weak monotonicity and showing that if $k$ cops win weakly monotonically, then $O(k^2)$ cops win monotonically. It follows that bounded Kelly-width implies bounded DAG-width, which has been open since the definition of Kelly-width by Hunter and Kreutzer in 2008. For directed tree width games we show that, unexpectedly, the cop-monotonicity cost (no cop revisits any vertex) is not bounded by any function. This separates directed tree width from D-width, defined by Safari in 2005, refuting his conjecture.

#### High Level Hardware/Software Embedded System Design with Redsharc. (arXiv:1408.4725v1 [cs.SE])

As tools for designing multiple processor systems-on-chips (MPSoCs) continue to evolve to meet the demands of developers, there exist systematic gaps that must be bridged to provide a more cohesive hardware/software development environment. We present Redsharc to address these problems and enable: system generation, software/hardware compilation and synthesis, and run-time control and execution of MPSoCs. The efforts presented in this paper extend our previous work to provide a rich API, build infrastructure, and runtime enabling developers to design a system of simultaneously executing kernels, in software or hardware, that communicate seamlessly. In this work we take Redsharc further to support a broader class of applications across a larger number of devices, requiring a more unified system development environment and build infrastructure.
To accomplish this we leverage existing tools and extend Redsharc with build and control infrastructure to relieve the burden of system development, allowing software programmers to focus their efforts on application and kernel development.

#### Code Generation for High-Level Synthesis of Multiresolution Applications on FPGAs. (arXiv:1408.4721v1 [cs.CV])

Multiresolution Analysis (MRA) is a mathematical method that is based on working on a problem at different scales. One of its applications is medical imaging, where processing at multiple scales, based on the concept of Gaussian and Laplacian image pyramids, is a well-known technique. It is often applied to reduce noise while preserving image detail on different levels of granularity, without modifying the filter kernel. In scientific computing, multigrid methods are a popular choice, as they are asymptotically optimal solvers for elliptic Partial Differential Equations (PDEs). As such algorithms have a very high computational complexity that would overwhelm CPUs in the presence of real-time constraints, application-specific processors come into consideration for implementation. Despite huge advancements in leveraging productivity in the respective fields, designers are still required to have detailed knowledge about coding techniques and the targeted architecture to achieve efficient solutions. Recently, the HIPAcc framework was proposed as a means for automatic code generation of image processing algorithms, based on a Domain-Specific Language (DSL). From the same code base, it is possible to generate code for efficient implementations on several accelerator technologies, including different types of Graphics Processing Units (GPUs) as well as reconfigurable logic (FPGAs). In this work, we demonstrate the ability of HIPAcc to generate code for the implementation of multiresolution applications on FPGAs and embedded GPUs.
#### Making FPGAs Accessible to Scientists and Engineers as Domain Expert Software Programmers with LabVIEW. (arXiv:1408.4715v1 [cs.SE])

In this paper we present a graphical programming framework, LabVIEW, and associated language and libraries, as well as programming techniques and patterns that we have found useful in making FPGAs accessible to scientists and engineers as domain expert software programmers.

#### Non-predetermined Model Theory. (arXiv:1408.4681v1 [math.LO])

This article introduces a new model theory, called non-predetermined model theory, in which functions and relations need not be determined already and are instead determined through time.

#### Incremental Cardinality Constraints for MaxSAT. (arXiv:1408.4628v1 [cs.LO])

Maximum Satisfiability (MaxSAT) is an optimization variant of the Boolean Satisfiability (SAT) problem. In general, MaxSAT algorithms perform a succession of SAT solver calls to reach an optimum solution, making extensive use of cardinality constraints. Many of these algorithms are non-incremental in nature, i.e. at each iteration the formula is rebuilt and no knowledge is reused from one iteration to another. In this paper, we exploit the knowledge acquired across iterations using novel schemes to use cardinality constraints in an incremental fashion. We integrate these schemes with several MaxSAT algorithms. Our experimental results show a significant performance boost for these algorithms as compared to their non-incremental counterparts. These results suggest that incremental cardinality constraints could be beneficial for other constraint solving domains.

#### EURETILE D7.3 - Dynamic DAL benchmark coding, measurements on MPI version of DPSNN-STDP (distributed plastic spiking neural net) and improvements to other DAL codes. (arXiv:1408.4587v1 [cs.DC])

The EURETILE project required the selection and coding of a set of dedicated benchmarks.
The project is about the software and hardware architecture of future many-tile distributed fault-tolerant systems. We focus on dynamic workloads characterised by heavy numerical processing requirements. The ambition is to identify common techniques that could be applied to both the Embedded Systems and HPC domains. This document is the first public deliverable of Work Package 7: Challenging Tiled Applications.

#### Experiments Validating the Effectiveness of Multi-point Wireless Energy Transmission with Carrier Shift Diversity. (arXiv:1408.4539v1 [cs.NI])

This paper presents a method to seamlessly extend the coverage of the energy supply field for wireless sensor networks in order to free sensors from wires and batteries, where the multi-point scheme is employed to overcome path-loss attenuation, while the carrier shift diversity is introduced to mitigate the effect of interference between multiple wave sources. As we focus on the energy transmission part, sensor and communication schemes are out of the scope of this paper. To verify the effectiveness of the proposed wireless energy transmission, this paper conducts indoor experiments in which we compare the power distribution and the coverage performance of different energy transmission schemes, including the conventional single-point scheme, a simple multi-point scheme, and our proposed multi-point scheme. To easily observe the effect of the standing wave caused by multipath and interference between multiple wave sources, 3D measurements are performed in an empty room. The results of our experiments, together with those of a simulation that assumes a similar antenna setting in a free-space environment, show that the coverage of single-point and multi-point wireless energy transmission without carrier shift diversity is limited by path-loss, standing waves created by multipath, and interference between multiple wave sources.
On the other hand, the proposed scheme can overcome power attenuation due to the path-loss as well as the effect of standing waves created by multipath and interference between multiple wave sources.

#### Laplace Functional Ordering of Point Processes in Large-scale Wireless Networks. (arXiv:1408.4528v1 [cs.IT])

Stochastic orders on point processes are partial orders which capture notions like being larger or more variable. Laplace functional ordering of point processes is a useful stochastic order for comparing spatial deployments of wireless networks. It is shown that the ordering of point processes is preserved under independent operations such as marking, thinning, clustering, superposition, and random translation. Laplace functional ordering can be used to establish comparisons of several performance metrics such as coverage probability, achievable rate, and resource allocation even when closed form expressions of such metrics are unavailable. Applications in several network scenarios are also provided where tradeoffs between coverage and interference as well as fairness and peakyness are studied. Monte-Carlo simulations are used to supplement our analytical results.

#### Monoids with tests and the algebra of possibly non-halting programs. (arXiv:1408.4498v1 [math.LO])

We study the algebraic theory of computable functions, which can be viewed as arising from possibly non-halting computer programs or algorithms, acting on some state space, equipped with operations of composition, {\em if-then-else} and {\em while-do} defined in terms of a Boolean algebra of conditions. It has previously been shown that there is no finite axiomatisation of algebras of partial functions under these operations alone, and this holds even if one restricts attention to transformations (representing halting programs) rather than partial functions, and omits {\em while-do} from the signature.
In the halting case, there is a natural "fix", which is to allow composition of halting programs with conditions, and then the resulting algebras admit a finite axiomatisation. In the current setting such compositions are not possible, but by extending the notion of {\em if-then-else}, we are able to give finite axiomatisations of the resulting algebras of (partial) functions, with {\em while-do} in the signature if the state space is assumed finite. The axiomatisations are extended to consider the partial predicate of equality. All algebras considered turn out to be enrichments of the notion of a (one-sided) restriction semigroup.

#### On Optimal Decision-Making in Ant Colonies. (arXiv:1408.4487v1 [cs.DC])

Colonies of ants can collectively choose the best of several nests, even when many of the active ants who organize the move visit only one site. Understanding such a behavior can help us design efficient distributed decision making algorithms. Marshall et al. propose a model for house-hunting in colonies of the ant Temnothorax albipennis. Unfortunately, their model does not achieve optimal decision-making, while laboratory experiments show that, in fact, colonies usually achieve optimality during the house-hunting process. In this paper, we argue that the model of Marshall et al. can achieve optimality by including nest size information in their mathematical model. We use lab results of Pratt et al. to re-define the differential equations of Marshall et al. Finally, we sketch our strategy for testing the optimality of the new model.

#### Undecidability of Finite Model Reasoning in DLFD. (arXiv:1408.4468v1 [cs.DB])

We resolve an open problem concerning finite logical implication for path functional dependencies (PFDs).

#### Bounds for variables with few occurrences in conjunctive normal forms.
(arXiv:1408.0629v1 [math.CO] CROSS LISTED) We investigate connections between SAT (the propositional satisfiability problem) and combinatorics, around the minimum degree (occurrence) of variables in various forms of redundancy-free boolean conjunctive normal forms (clause-sets). Lean clause-sets do not have non-trivial autarkies, that is, it is not possible to satisfy some clauses and leave the other clauses untouched. The deficiency of a clause-set is the difference of the number of clauses and the number of variables. We prove a sharp upper bound on the minimum variable degree in dependency on the deficiency. If a clause-set does not fulfil this upper bound, then it must have a non-trivial autarky; we show that the autarky-reduction (elimination of affected clauses) can be done in polynomial time, while it is open to find the autarky itself in polynomial time. Then we investigate this upper bound for the special case of minimally unsatisfiable clause-sets. Here we show that the bound can be improved. We consider precise relations, and the investigations have a certain number-theoretical flavour. We try to build a bridge from logic to combinatorics (especially to hypergraph colouring), and thus we discuss thoroughly the background and open problems, and provide many examples and explanations.

#### Division by zero in non-involutive meadows.

(arXiv:1406.2092v1 [math.RA] CROSS LISTED) Meadows have been proposed as alternatives for fields with a purely equational axiomatization. At the basis of meadows lies the decision to make the multiplicative inverse operation total by imposing that the multiplicative inverse of zero is zero. Thus, the multiplicative inverse operation of a meadow is an involution. In this paper, we study 'non-involutive meadows', i.e. variants of meadows in which the multiplicative inverse of zero is not zero, and pay special attention to non-involutive meadows in which the multiplicative inverse of zero is one.
#### Max-Sum Diversification, Monotone Submodular Functions and Dynamic Updates. 

(arXiv:1203.6397v2 [cs.DS] UPDATED) Result diversification is an important aspect in web-based search, document summarization, facility location, portfolio management and other applications. Given a set of ranked results for a set of objects (e.g. web documents, facilities, etc.) with a distance between any pair, the goal is to select a subset $S$ satisfying the following three criteria: (a) the subset $S$ satisfies some constraint (e.g. bounded cardinality); (b) the subset contains results of high "quality"; and (c) the subset contains results that are "diverse" relative to the distance measure. The goal of result diversification is to produce a diversified subset while maintaining high quality as much as possible. We study a broad class of problems where the distances are a metric, where the constraint is given by independence in a matroid, where quality is determined by a monotone submodular function, and diversity is defined as the sum of distances between objects in $S$. Our problem is a generalization of the {\em max sum diversification} problem studied in \cite{GoSh09} which in turn is a generalization of the {\em max sum $p$-dispersion problem} studied extensively in location theory. It is NP-hard even with the triangle inequality. We propose two simple and natural algorithms: a greedy algorithm for a cardinality constraint and a local search algorithm for an arbitrary matroid constraint. We prove that both algorithms achieve constant approximation ratios.

### /r/emacs

#### An emacs theme gallery

I can confirm it works on Firefox, Chrome, Safari and IE (10, 11).
You can find a detailed description of the project here: https://github.com/pawelbx/emacs-theme-gallery

submitted by pawelb

### Planet Theory

#### Tractable Pathfinding for the Stochastic On-Time Arrival Problem

Authors: Mehrdad Niknami, Samitha Samaranayake, Alexandre Bayen

Download: PDF

Abstract: We present a new technique for fast computation of the route that maximizes the probability of on-time arrival in stochastic networks, also known as the path-based stochastic on-time arrival (SOTA) problem. We utilize the solution to the policy-based SOTA problem, which is of pseudopolynomial time complexity in the time budget of the journey, as a heuristic for efficiently computing the optimal path. We also introduce Arc-Potentials, an extension to the Arc-Flags pre-processing algorithm, which improves the efficiency of the graph pre-processing and reduces the computation time. Finally, we present extensive numerical results demonstrating the effectiveness of our algorithm and observe that its running time when given the policy (which can be efficiently obtained using pre-processing) is almost always linear in the length of the optimal path for our test networks.

### StackOverflow

#### Find implicit value by abstract type member

With a type like trait A[T], finding an implicit in scope is simply implicitly[A[SomeType]]. Can this be done and, if so, how is this done where the type parameter is replaced with an abstract type member, as in trait A { type T }?

### HN Daily

#### Daily Hacker News for 2014-08-20

## August 20, 2014

### StackOverflow

#### lein - how to use a downloaded library

Let's say I find a cool clojure library like https://github.com/clojurewerkz/buffy. Now I want to use it, and it only lives on GitHub. How do I do this? I would love a full start-to-finish hello world example. I've read about compiling it as a jar and using that, or using :dependencies in my project.clj, but so far no examples have been complete, and I'm new.
For example in python I'd git clone the repo into the root of my working tree and any file could just say import buffy.

### Planet Theory

#### Simons-Berkeley Research Fellowships in Cryptography for Summer 2015

The Simons Institute for the Theory of Computing at UC Berkeley invites applications for Research Fellowships for the research program on Cryptography that will take place in Summer 2015. These Fellowships are open to outstanding junior scientists (at most 6 years from PhD by 1 May, 2015). Further details and application instructions can be found at simons.berkeley.edu/fellows-summer2015. General information about the Simons Institute can be found at simons.berkeley.edu, and about the Cryptography program at simons.berkeley.edu/programs/crypto2015. Deadline for applications: 30 September, 2014.

### UnixOverflow

#### What are the differences between the socket polling mechanisms of kqueue and epoll?

The kqueue socket polling mechanism is used in FreeBSD and epoll in Linux. I would like to know what the differences between the two mechanisms are.

### StackOverflow

#### Converting a java.util.Set to java.util.List in Scala

While in a project that is a mix of Scala and Java, I need to convert a Java Set into a Java List while in the Scala portion of the code. What are some efficient ways of doing this? I could potentially use JavaConverters to convert Java Set -> Scala Set -> Scala List -> Java List. Are there other options that would be more efficient? Thanks

### /r/compilers

#### Great book for beginning compiler developers [in c]

### StackOverflow

#### Calculating percent difference between elements in a list with functional programming in Mathematica?

This stems from a related discussion, How to subtract specific elements in a list using functional programming in Mathematica? How does one go about easily calculating percent differences between values in a list?
The linked question uses Differences to easily calculate absolute differences between successive elements in a list. However easy the built-in Differences function makes that particular problem, it still leaves the question as to how to perform different manipulations. As I mentioned earlier, I am looking to now calculate percent differences. Given a list of elements, {value1, value2, ..., valueN}, how does one perform an operation like (value2-value1)/value1 on said list? I've tried finding a way to use Slot or SlotSequence to isolate specific elements and then apply a custom function to them. Is this the most efficient way to do something like this (assuming that there is a way to isolate elements and perform operations on them)?

### TheoryOverflow

#### Compute "must-pass" nodes between two nodes in a flow graph (a directed graph with a start vertex)

Suppose I have a flow graph, i.e., a directed graph with a start vertex. Let p and q be two different nodes of the graph; I would like to find the nodes that have to be passed through when a path goes through p and q sequentially. For example, in the flow graph below, the must-pass node between 'b' and 'd' is 'c', and there are no must-pass nodes between 'a' and 'd'. Given two nodes of a flow graph, is there a general algorithm that identifies must-pass nodes? Thank you.

### Portland Pattern Repository

#### Matt Stephenson (by 99-98-229-88.lightspeed.mssnks.sbcglobal.net 19 hours ago)

### /r/compsci

#### Is it worth taking a lower level scientific computing class?

I'm a physics PhD student with a bit of scientific computing experience. Over the next year I will be taking a computational fluids course which is relevant to my field of study. Now I'm planning on getting a graduate-level minor in computer science too, and to do so I will need to take two more CS classes. Do you guys think I will learn anything that I wouldn't on my own from the introductory scientific computing classes?
Or should I look towards higher level classes?

submitted by heart_of_gold1

### QuantOverflow

#### What is the effect of dividend yield being greater than the risk-free rate to American options pricing?

Even though dividends are discrete, the literature often makes the assumption of continuous dividends (mostly in the case of indices, but for individual stocks as well). The dividend yield, denoted by q, is often treated as an adjustment to the risk-free rate (i.e. r-q). My question is, what happens to American call options if r-q < 0? Is it now possible to exercise before maturity, so that the option can no longer be priced as a European option? Logic says you can early exercise, but I am not sure. A footnote: in the discrete dividend case we know that we should only exercise American calls before maturity if the excess value of the option is less than the dividend. Otherwise the value of the American option will always be greater than the exercise price. This is mainly due to r > 0, and in the rare case of r < 0 American puts become equivalent to European puts.

### StackOverflow

#### Nesting json - Play 2.3

I'm trying to nest json like this:

case class Foo(id: Int, a: String, b: String)

def barJson = Json.obj("hello" -> "hi")

def getFooJson = Json.obj {
"foos" -> Json.arr {
fooTable.list.map { foo =>
Json.toJson(foo) + barJson
}
}
}

But I'm getting this error:

type mismatch;
[error] found : play.api.libs.json.JsObject
[error] required: String

What am I doing wrong here & how can I fix it? The result I'm going after is something like this:

"foos": [
{ "a": "hi", "b": "bye", "bar": { "hello": "bye" } },
{ "a": "hi2", "b": "bye2", "bar": { "hello": "bye" } }
]

### /r/freebsd

#### What are the best practices for configuring /tmp on FreeBSD?

I noticed that FreeBSD 10.0 doesn't by default create a ramdisk for /tmp. Is there any reason why this is, or why I should not do so? Thanks for reading!
submitted by good_names_all_taken

### QuantOverflow

#### Approximation of different volatilities

Suppose I model the forward swap rate as lognormal $$dS_t = \sigma_{ln}S_tdW_t$$ On the other hand we could model it simply under a normal assumption: $$dS_t = \sigma_{n}dW_t$$ I would like to know if there is a relationship between the volatilities $\sigma_n, \sigma_{ln}$? A friend told me that he saw the approximation $$\sigma_n\approx \sigma_{ln}S_t$$ However, neither my friend nor I was able to come up with a justification of this approximation. So is this a valid approximation? If so, why, and if not, how else can I relate the two volatilities?

### StackOverflow

#### Modelling producer-consumer semantics with typeclasses?

If some entities in a system can function as producers of data or events, and other entities can function as consumers, does it make sense to externalize these "orthogonal concerns" into Producer and Consumer typeclasses? I can see that the Haskell Pipes library uses this approach, and appreciate this question may look pretty basic for people coming from a Haskell background, but I would be interested in a Scala perspective and examples because I don't see a lot of them.

#### Java functional map() with threading

I have an array of many hundreds of thousands of elements and I need to run a time-consuming operation on each. I'm hesitant to use Executor due to the sheer number of elements; is there any way I can do the computation on all the elements utilising multithreading without rolling my own solution?

### Lobsters

#### How to run your own e-mail server with your own domain, part 1

### /r/netsec

#### MAST: An Obfuscation Toolkit

### StackOverflow

#### List of options: equivalent of sequence in Scala?

What is the equivalent of Haskell's sequence in Scala? I want to turn a list of options into an option of list. It should come out as None if any of the options is None.

List(Some(1), None, Some(2)).??? --> None
List(Some(1), Some(2), Some(3)).???
--> Some(List(1, 2, 3))

#### How to access command line parameters in build definition?

I'd like to be able to modify some build tasks in response to command line parameters. How do I (can I?) access command line parameters from the Build.scala file?

### CompsciOverflow

#### Proving number of internal nodes in the subtree rooted at any node x of Red Black trees

Reading Lemma 13.1 from the book Introduction to Algorithms, 3rd Edition. To prove: a red-black tree with n nodes has height at most 2 lg(n+1). First it attempts to prove: the subtree rooted at any node x contains at least $2^{bh(x)}-1$ internal nodes (with internal nodes referring to normal nodes, those with keys, and external nodes referring to null pointers pointing out of leaf nodes). It proceeds by considering x to be an internal node with positive height and two children. Then it says these exact words: Each child has a black-height of either bh(x) or bh(x)-1, depending on whether its color is red or black, respectively. Since the height of a child of x is less than the height of x itself, we can apply the inductive hypothesis to conclude that each child has at least $2^{bh(x)-1}-1$ internal nodes. Thus, the subtree rooted at x contains at least $(2^{bh(x)-1}-1)+(2^{bh(x)-1}-1)+1 = 2^{bh(x)}-1$ internal nodes. I don't get this much, especially the first two sentences. It may be my poor maths.

### /r/compsci

#### 1 KB Hard Drive in Vanilla Minecraft

### AWS

#### New SSL Features for Amazon CloudFront - Session Tickets, OCSP Stapling, Perfect Forward Secrecy

You probably know that you can use Amazon CloudFront to distribute your content to users around the world with a high degree of security, low latency and high data transfer speed. CloudFront supports the use of secure HTTPS connections from the origin to the edge and from the edge to the client; if you enable this option data travels from the origin to your end users in a secure, encrypted form.
Today we are making some additional improvements to the performance and security of CloudFront's SSL implementation. These features are enabled automatically and work with the default CloudFront SSL certificate as well as custom (SNI and Dedicated IP) SSL certificates. Performance Enhancements We have improved the performance of SSL connections with the use of Session Tickets and OCSP Stapling. Both of these features are built in to the SSL protocol and you don't have to make any code or configuration changes in order to use them. In other words, you (and your users) are already benefitting from these improvements. SSL Session Tickets - As part of the SSL handshake process, the client and the server exchange multiple packets as part of a negotiation ritual that results in agreement to use a particular encryption model (cipher) and certificate. This process entails multiple round trips and a fair amount of computation on both ends which adds some latency to the connection process. This process has to be repeated if the connection is broken. To avoid some of this rigmarole while keeping the connection secure, CloudFront now implements SSL Session Tickets. After the negotiation is complete, the SSL server creates an encrypted session ticket and returns it to the client. Later, the client can present this ticket to the server as an alternative to a full negotiation when resuming or restarting a connection. The ticket reminds the server of what they have already agreed to as part of an earlier SSL handshake. OCSP Stapling - An SSL certificate must be validated before it can be used. The certificate authority (CA) for the certificate must be consulted in order to ensure that the certificate is legitimate and that it has not been revoked. In the absence of support for OCSP Stapling, the client (e.g. a web browser) will take care of this interaction with the CA, once again at the cost of some round trips and the associated latency they bring. 
CloudFront now implements OCSP Stapling. This approach moves the burden of domain name resolution (to locate the CA) and certificate validation over to CloudFront, where the results can be cached and then attached (stapled, hence the name) to one of the packets in the SSL handshake. The clients no longer need to handle the domain name resolution or certificate validation and benefit from the work done on the server. Security Enhancements We have added support for Perfect Forward Secrecy and newer SSL ciphers. Perfect Forward Secrecy - This feature creates a new private key for each SSL session. In the event that a private key for a session was discovered, it could be used only to decode that session and no other, past or future. Newer Ciphers - CloudFront now supports a set of advanced RSA-AES ciphers. The server and the client agree on a cipher automatically as part of the SSL handshake process. Available Now These new features are available now at no extra charge and you may already be using them today! See the CloudFront Pricing page for more information. -- Jeff;

### StackOverflow

#### Make S.M.A.R.T. hex dump readable

I have 3DM2 (3ware raid manager) installed on a server running FreeBSD. In 3DM2 I can get a hex dump of hard disk S.M.A.R.T. data (probably not needed in this question, but it looks like this: 0A 00 01 0F 00 75 63 53 FD 63 08 00 00 00 03 03 00 61 61 00 00 00 00 00 00 00 04 32 00 64 64 70 00 00 00 00 00 00 05 33 00 64 64 00 00 00 00 00 etc.) Is there a tool that I can use to convert it to something user-readable/understandable?

#### Why do my MacVIM and terminal vi look different?

I'm using both console and GUI VIM. I cannot understand why my GUI vim shows a different color palette and different parentheses colors (Rainbow Parentheses plugin). Console vim is on the left (and it seems to be better):

### DragonFly BSD Digest

#### New dhclient and other improvements

DragonFly's dhclient will now retry failed interfaces and handle being re-run gracefully.
This is a blessing for anyone who has had a flaky link. Matthew Dillon's made two other improvements for booting that will also improve boot time when networks go missing.

### TheoryOverflow

#### Finding a point outside of each of a set of polygons in a bounded space

I know there are algorithms for finding a point inside a simple polygon. Given a set of polygons inside a rectangle (think a bunch of polygons on a computer screen), is there an efficient algorithm for finding a point that is inside the rectangle but not inside any of the specified polygons? (Note that these polygons don't overlap, but may share a common border (or part of a border).)

### StackOverflow

#### wget in path does not work in freebsd

I have just installed FreeBSD 7 in my VMware. However, I found no wget in this OS, so I downloaded wget-1.15.tar.gz from the website and installed wget. Then I ran into this strange problem:

# wget
wget: Command not found.
# whereis wget
wget: /usr/local/bin/wget
# env | grep PATH
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/root/bin
# ln -s /usr/local/bin/wget /bin
# whereis wget
wget: /bin/wget
# wget
wget: Command not found.
# /bin/wget
wget: missing URL
Usage: wget [OPTION]... [URL]...
Try `wget --help' for more options.

Why can the system find wget when I type /bin/wget, but not when I type just wget? You can see that /bin is already in my PATH. Thanks.

### CompsciOverflow

#### Help with linked list [migrated]

As I have had no formal education in computer science, please pardon me if my question seems trivial. I was reading up on linked lists, and the only good source I could find was the one from the Stanford CS library. I was hoping to implement what I learned from it and run it through my compiler. The program is to find the number of elements in a linked list of {1,2,3}.
As I understood it, this is what I did:

#include <stdio.h>
#include <stdlib.h>

struct node {
int data;
struct node* next;
};

/* allocate 3 nodes in the heap and link them up */
struct node* BuildOneTwoThree() {
struct node* head = malloc(sizeof(struct node));
struct node* second = malloc(sizeof(struct node));
struct node* third = malloc(sizeof(struct node));
head->data = 1; /* setup first node */
head->next = second; /* note: pointer assignment rule */
second->data = 2; /* setup second node */
second->next = third;
third->data = 3; /* setup third link */
third->next = NULL;
return head;
}

/* count the nodes in the list */
int Length(struct node* head) {
struct node* current = head;
int count = 0;
while (current != NULL) {
count++;
current = current->next;
}
printf("%d\n", count);
return count;
}

int main() {
Length(BuildOneTwoThree());
return 0;
}

My original attempt had BuildOneTwoThree and Length nested inside main, and Length was never called, so the program printed nothing. I don't understand where I made a mistake; what am I doing wrong?

### Planet Clojure

#### Using A ref As A Mutable Global Flag

I have written a new, small Clojure program to compare this month's and last month's insurance report. This is similar to a project I did a year ago, except it involves one report our personnel department gets once a month, not two different reports. The program involves using Clojure's jdbc interface, and is very much a typical database report that could have been written in Perl, or if the database had been Informix, in Informix 4GL. There's nothing special about the program, except that the code base was already in Clojure, and I wanted to keep the code base the same. The only roadblock I ran into was setting status from the result of certain difference tests between last month and this month, like whether a record wasn't there this month or last month, whether or not the insurance product or premium had changed, or if someone had gone from an active to a retired status. I tried figuring out a way to have a let binding contain the return status from these different tests, so that these status values could be written into the report.
After a while, I settled on a ref and dosync to set one global flag, so that later on in the program, had there been no errors, an appropriate message could be written to the file. I don't know whether I crossed into the mutable dark force, but, for one, I'm not convinced that carefully used mutable variables are a bad thing, especially if you've designed the rest of your program not to take these shortcuts because of coding laziness. Can you tell I've absorbed guilt from Clojure's being immutable?

### /r/emacs

#### grunt.el - some glue to stick Emacs and Gruntfiles together

### StackOverflow

#### jackson-module-scala: how to read subtype?

With this jackson-module-scala wrapper:

object Json {
private val ma = new ObjectMapper with ScalaObjectMapper
ma.registerModule(DefaultScalaModule)
ma.setSerializationInclusion(Include.NON_NULL)

def jRead[T: Manifest](value: String): T = ma.readValue[T](value)
def jWrite(value: Any) = ma.writer.writeValueAsString(value)
def jNode(value: String) = ma.readTree(value)
}
at [Source: {"i":42}; line: 1, column: 2]
at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
at com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:73)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:124)

### High Scalability

#### Part 2: The Cloud Does Equal High Performance

This is a guest post by Anshu Prateek, Tech Lead, DevOps at Aerospike and Rajkumar Iyer, Member of the Technical Staff at Aerospike.

In our first post we busted the myth that cloud != high performance and outlined the steps to 1 Million TPS (100% reads in RAM) on 1 Amazon EC2 instance for just $1.68/hr. In this post we evaluate the performance of 4 Amazon instances when running a 4 node Aerospike cluster in RAM with 5 different read/write workloads and show that the r3.2xlarge instance delivers the best price/performance. Several reports have already documented the performance of distributed NoSQL databases on virtual and bare metal cloud infrastructures:

### StackOverflow

#### Exception when trying to refresh Clojure code in cider

I am using Clojure in Emacs with cider and the cider repl (0.7.0). This is pretty fine, but whenever I run cider-refresh (or hit C-c C-x), I get an exception: ClassNotFoundException clojure.tools.namespace.repl java.net.URLClassLoader$1.run (URLClassLoader.java:372)

1. Unhandled java.lang.ClassNotFoundException
clojure.tools.namespace.repl

URLClassLoader.java:  372  java.net.URLClassLoader$1/run URLClassLoader.java: 361 java.net.URLClassLoader$1/run
AccessController.java:   -2  java.security.AccessController/doPrivileged
Class.java:   -2  java.lang.Class/forName0
Class.java:  340  java.lang.Class/forName
RT.java: 2065  clojure.lang.RT/classForName
Compiler.java:  978  clojure.lang.Compiler$HostExpr/maybeClass Compiler.java: 756 clojure.lang.Compiler$HostExpr/access$400 Compiler.java: 6583 clojure.lang.Compiler/macroexpand1 Compiler.java: 6613 clojure.lang.Compiler/macroexpand Compiler.java: 6687 clojure.lang.Compiler/eval Compiler.java: 6666 clojure.lang.Compiler/eval core.clj: 2927 clojure.core/eval main.clj: 239 clojure.main/repl/read-eval-print/fn main.clj: 239 clojure.main/repl/read-eval-print main.clj: 257 clojure.main/repl/fn main.clj: 257 clojure.main/repl RestFn.java: 1096 clojure.lang.RestFn/invoke interruptible_eval.clj: 56 clojure.tools.nrepl.middleware.interruptible-eval/evaluate/fn AFn.java: 152 clojure.lang.AFn/applyToHelper AFn.java: 144 clojure.lang.AFn/applyTo core.clj: 624 clojure.core/apply core.clj: 1862 clojure.core/with-bindings* RestFn.java: 425 clojure.lang.RestFn/invoke interruptible_eval.clj: 41 clojure.tools.nrepl.middleware.interruptible-eval/evaluate interruptible_eval.clj: 171 clojure.tools.nrepl.middleware.interruptible-eval/interruptible-eval/fn/fn core.clj: 2402 clojure.core/comp/fn interruptible_eval.clj: 138 clojure.tools.nrepl.middleware.interruptible-eval/run-next/fn AFn.java: 22 clojure.lang.AFn/run ThreadPoolExecutor.java: 1142 java.util.concurrent.ThreadPoolExecutor/runWorker ThreadPoolExecutor.java: 617 java.util.concurrent.ThreadPoolExecutor$Worker/run


What is the reason for this, and how can I fix it?
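For what it's worth, cider-refresh is built on the clojure.tools.namespace library, and a ClassNotFoundException for clojure.tools.namespace.repl usually means that library isn't on the project's classpath. A likely fix (the version number shown is illustrative, not prescribed) is to add it to project.clj, e.g. under the :dev profile, and restart the REPL:

```clojure
;; project.clj -- make tools.namespace available to the nREPL session
;; (the version number here is illustrative; check for the current release)
:profiles {:dev {:dependencies [[org.clojure/tools.namespace "0.2.5"]]}}
```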

#### Using Free with a non-functor in Scalaz

In the "FP in Scala" book there's this approach for using an ADT S as an abstract instruction set like

sealed trait Console[A]
case class PrintLine(msg: String) extends Console[Unit]


and composing them with a Free[S, A] where S would later be translated to an IO monad. Can this be done with Scalaz's Free type? It seems that all run methods require a functor instance for S.
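Scalaz's answer to this is Coyoneda, the "free functor": wrapping S in a Coyoneda yields something mappable for any S, and Scalaz 7.1 bundles this with Free (the FreeC alias and Free.liftFC -- treat those identifiers as assumptions to check against your Scalaz version). A self-contained, hand-rolled sketch shows why no Functor[S] is required:

```scala
// A minimal Coyoneda: it suspends mapped functions instead of applying
// them, so the wrapped F needs no Functor instance. Illustrative sketch,
// not Scalaz's actual API.
trait Coyo[F[_], A] { self =>
  type I                       // the original element type, kept abstract
  val fi: F[I]                 // the suspended F value
  val k: I => A                // the accumulated mapping function
  def map[B](f: A => B): Coyo[F, B] = new Coyo[F, B] {
    type I = self.I
    val fi = self.fi
    val k = self.k andThen f   // compose plain functions; F is never touched
  }
}

object Coyo {
  def lift[F[_], A](fa: F[A]): Coyo[F, A] = new Coyo[F, A] {
    type I = A
    val fi = fa
    val k = (a: A) => a
  }
}

sealed trait Console[A]
case class ReadLine(prompt: String) extends Console[String]

// Console has no Functor instance, yet the lifted value maps fine:
val mapped: Coyo[Console, Int] = Coyo.lift[Console, String](ReadLine("> ")).map(_.length)
```

A Free built over the Coyoneda-wrapped S then only ever needs this map, which is what the FreeC construction exploits.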

### CompsciOverflow

#### SLR(1) and LALR(1) in given grammar [on hold]

In fact, I ran into a multiple-choice question in a recent exam in a compiler course. Suppose tables T1 and T2 are created with SLR(1) and LALR(1), respectively, for a grammar G. If G is an SLR(1) grammar, which of the following is TRUE?

a) just T1 has meaning for G.

b) T1 and T2 has not any difference.

c) total Number of non-error element in T1 is lower than T2

d) total Number of error element in T1 is lower than T2

My solution:

If the grammar is SLR(1), it is also LALR(1), and then the LALR(1) table has the same size as the SLR(1) table, so (b) is correct.

#### Grammars: is there some connection between non-terminals $S$ and $S'$?

Given a grammar such as the following, does $S'$ have some special meaning or does it just denote another non-terminal like $B$, $A$, $P$, $Q$ etc.?

\begin{align*} S &\to aBS'\\ B &\to b\\ S'&\to bA \end{align*}

### /r/compsci

#### Ask CompSci: How do you stay up to date on the latest research/happenings in the comp sci world?

So I just got done reading a few Simon Singh books, and I noticed that everyone he talks about got ideas from reading a paper that landed on their desk written by someone else in the field. So my question is, where do you go to get this info? Where do you go to keep tabs on what's happening in comp sci?

submitted by Killobyte

### QuantOverflow

#### How to design a custom equity backtester?

I was thinking about writing my own backtester and I realize I have to make some assumptions. So I was hoping I could post what I am planning on doing and hopefully some of you can give me some ideas on how to make it better (I'm sure there is a lot that can be improved).

First of all, my strategy involves holding stocks for usually some days, I am not doing (probably any) intra-day trading.

Lastly I would include the cost of commission in the trade. I would neglect any effect my trade would have on the market. Is there any rough guideline using volume to estimate how much you would have to buy/sell to have an effect?

I would also simulate stop-loss sell orders and they, too, would be executed at the next bar low after the price passed the threshold.

That's it, it will be pretty simple to implement. I am just hoping to make it conservative so it can give me some insight into how well my program works.

Any thoughts or criticisms about this program would be greatly appreciated. I am new at this and I am sure there are some details I am missing.
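The conservative fill rule described above -- a stop triggers on one bar and fills at the next bar's low -- can be sketched in a few lines; the Bar type and trigger condition here are illustrative assumptions, not details from the question:

```scala
case class Bar(open: Double, high: Double, low: Double, close: Double)

// Pessimistic stop-loss simulation: find the first bar whose low crosses
// the stop threshold, then fill at the *next* bar's low.
def stopLossFill(bars: List[Bar], stop: Double): Option[Double] =
  bars.zip(bars.drop(1)).collectFirst {
    case (trigger, next) if trigger.low <= stop => next.low
  }

val bars = List(
  Bar(10.0, 11.0, 9.5, 10.5),
  Bar(10.5, 10.6, 9.0, 9.2),  // a stop at 9.1 triggers here...
  Bar(9.2, 9.4, 8.8, 9.0)     // ...and fills at this bar's low
)
```

Filling at the next bar's low rather than at the trigger price keeps the simulation on the conservative side, which matches the stated goal.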

### Lambda the Ultimate Forum

#### Function Types and Dylan 2016

Function Types and Dylan 2016

Moving towards Dylan 2016, the Dylan community would like to address some weaknesses in the language specification and what can be readily expressed in Dylan code. In this post, we'll look at function types as well as provide a brief introduction to some details of the type system implementation within the Open Dylan compiler.

One of the big holes in the Dylan type system is the inability to specify function types. What this means is that you can only say that a value is of type <function> and can't indicate anything about the desired signature, types of arguments, return values, etc. This is unfortunate for a number of reasons:

• Poor static type safety. The compiler can verify very little involving a function value. It can't warn when the wrong number of arguments or the wrong types of arguments are passed.
• Less clear interfaces. The type signature of a function must be documented clearly rather than being expressed clearly within the code.
• Optimization is more difficult. Since the compiler can't perform as many checks at compile time, more checks need to be performed at run-time, which limits the amount of optimization that can be performed by the compiler and restricts the generated code to using slower paths for function invocation.

In addition, function types may allow us to improve type inference. This is something that people have long wanted to have in the Dylan language.
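
For contrast, here is a minimal Scala sketch (an illustration of the idea, not Dylan) of what expressible function types buy you: the signature below pins down arity, argument type, and return type, so the compiler rejects misuse at compile time.

```scala
object FunctionTypesDemo {
  // `Int => String` is a full function type: the compiler checks that any
  // function passed here takes exactly one Int and returns a String.
  def describe(f: Int => String, n: Int): String = f(n)

  // describe((s: String) => s, 3)  // would not compile: wrong argument type
}
```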

### StackOverflow

#### How do I create a case-insensitive lexer

I'm trying to create a SQL lexer (well, a full parser but you have to start somewhere) and I'm not sure how to proceed. I want to write something like this:

def nextToken(input: List[Char]) = input match {
case 'S'::'E'::'L'::'E'::'C'::'T'::tail => (SELECT, tail)
case _ => ??? // etc.
}


But SQL is case insensitive. I could uppercase all the characters in input, but that would also uppercase strings. What I really need is a way to do case insensitive comparisons, and then be left with the correct tail (remainder List[Char] after matching a token). Is there a way to do this easily in Scala 2.10.x?
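
One hedged sketch (helper names like matchKeyword are mine, not a library API): uppercase only the prefix being compared, and hand back the untouched tail on a match.

```scala
object Lexer {
  sealed trait Token
  case object SELECT extends Token

  // Hypothetical helper: matches `keyword` (expected in uppercase) against the
  // start of `input`, ignoring case, and returns the untouched remainder.
  def matchKeyword(keyword: String, input: List[Char]): Option[List[Char]] = {
    val prefix = input.take(keyword.length)
    if (prefix.length == keyword.length && prefix.map(_.toUpper) == keyword.toList)
      Some(input.drop(keyword.length))
    else None
  }

  def nextToken(input: List[Char]): Option[(Token, List[Char])] =
    matchKeyword("SELECT", input).map(rest => (SELECT, rest))
}
```

String literals inside the input are never rewritten this way, because only the comparison is case-folded, not the input itself.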

#### Scala API: view IndexedSeq[T] as Map[Int, T]

Simply speaking, is there anything in the Scala collections library that provides a map-like view of an indexed sequence, using the indices as keys?

I have the following trait (the limit of 16 elems is intended and enforced by an external API):

trait Container[T >: Null]
{
private val ElemsLimit = 16 // block's meta is 4-bit
private var table: Seq[T] = null

protected def register(elems: (Int, T)*)(implicit manifest: Manifest[T]) =
{
if (table == null) {
val array = Array.fill[T](ElemsLimit)(null)
elems foreach { el => array(el._1) = el._2 }
table = array
}
}

def elem(idx: Int) = table(idx)
def allElems = table.zipWithIndex.filter(_  != null) // some mapView instead of zipWithIndex
}


I know that I can construct immutable map, and frankly speaking it will work just fine for my purposes. I can also write MapView for this myself. Though I'm really interested if there's existing solution somewhere. Or, maybe, there's array-backed immutable map which I missed.

Thanks.
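
If an actual copy is acceptable (as the question concedes it would be), the conversion is a one-liner; this is a sketch of that fallback, not a lazy view from the standard library.

```scala
object SeqAsMap {
  // Builds an immutable Map keyed by index. Note: this copies the sequence;
  // it is not a view, so it won't reflect later changes to `seq`.
  def toIndexMap[T](seq: IndexedSeq[T]): Map[Int, T] =
    seq.zipWithIndex.map(_.swap).toMap
}
```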

#### Is it possible to have macro annotation parameters (and how to get them)?

I have some data source that requires wrapping operations in transactions, which have 2 possible outcomes: success and failure. This approach introduces quite a lot of boilerplate code. What I'd like to do is something like this (the same remains true for failures (something like @txFailure maybe)):

@txSuccess(dataSource)
def writeData(data: Data*) {
dataSource.write(data)
}


Where @txSuccess is a macro annotation that, after processing will result in this:

def writeData(data: Data*) {
val tx = dataSource.openTransaction()

dataSource.write(data)

tx.success()
tx.close()
}


As you can see, this approach can prove quite useful, since in this example 75% of code can be eliminated due to it being boilerplate.

Is that possible? If yes, can you give me a nudge in the right direction? If no, what can you recommend in order to achieve something like that?

### TheoryOverflow

#### Is this NP-Hard or does a known optimal polynomial time solution exist?

Suppose we have 10 items, each of a different cost

Items: {1,2,3,4,5,6,7,8,9,10}

Cost: {2,5,1,1,5,1,1,3,4,10}

and 3 customers

{A,B,C}.

Each customer has a requirement for a set of items. He will either buy all the items in the set or none. There's just one copy of each item. For example, if

A requires {1,2,4}, Total money earned = 2+5+1= 8

B requires {2,5,10,3}, Total money earned = 5+5+10+1 = 21

C requires {3,6,7,8,9}, Total money earned = 1+1+1+3+4 = 10

So, if we sell A his items, B won't purchase from us because we don't have item 2 with us anymore. We wish to earn maximum money. By selling B, we can't sell to A and C. So, if we sell A and C, we earn 18. But just by selling B, we earn more, i.e., 21.

We thought of a bitmasking solution, which is exponential, though, and only feasible for small sets of items, and of heuristic solutions, which gave us non-optimal answers. But after multiple tries we couldn't come up with any fast optimal solution.

We were wondering if this is a known problem, or similar to one? Or is this problem NP-hard, so that a polynomial optimal solution doesn't exist (unless P = NP) and we're trying to achieve something that's not possible?

Also, does the problem change if all the items cost the same?

Thanks a lot.
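
A brute-force sketch of the exponential approach described above, enumerating subsets of customers rather than bitmasking items (all names are illustrative): keep only the subsets whose item sets are pairwise disjoint, and take the best revenue.

```scala
object SetPacking {
  // Tries every subset of customers (2^n for n customers); a subset is
  // feasible only if the chosen item sets are pairwise disjoint.
  def bestRevenue(cost: Map[Int, Int], demands: List[Set[Int]]): Int =
    (0 until (1 << demands.length)).map { mask =>
      val chosen = demands.zipWithIndex.collect {
        case (items, i) if (mask & (1 << i)) != 0 => items
      }
      // Disjoint iff no item appears in two chosen sets.
      val disjoint = chosen.map(_.size).sum == chosen.flatten.toSet.size
      if (disjoint) chosen.flatten.map(cost).sum else 0
    }.max
}
```

On the example in the question this picks B alone, for a revenue of 21.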

### StackOverflow

#### Use this in a generated macro method

This is a follow-up on my previous question.

I would like something like the code below to work. I want to be able to call a macro-generated method:

case class Cat()

test[Cat].method(1)


Where the implementation of the generated method itself is using a macro (a "vampire" method):

// macro call
def test[T] = macro testImpl[T]

// macro implementation
def testImpl[T : c.WeakTypeTag](c: Context): c.Expr[Any] = {
import c.universe._
val className = newTypeName("Test")

// IS IT POSSIBLE TO CALL otherMethod HERE?
val bodyInstance = q"(p: Int) => otherMethod(p * 2)"

c.Expr { q"""
class $className {
protected val aValue = 1

@body($bodyInstance)
def method(p: Int): Int = macro methodImpl[Int]

def otherMethod(p: Int): Int = p
}
new $className {}
"""} }

// method implementation
def methodImpl[F](c: Context)(p: c.Expr[F]): c.Expr[F] = {
import c.universe._
val field = c.macroApplication.symbol
val bodyAnnotation = field.annotations.filter(_.tpe <:< typeOf[body]).head
c.Expr(q"${bodyAnnotation.scalaArgs.head}.apply(${p.tree.duplicate})")
}


This code fails to compile with:

[error] no-symbol does not have an owner
[error]    last tree to typer: This(anonymous class $anonfun)
[error]               symbol: anonymous class $anonfun (flags: final <synthetic>)
[error]    symbol definition: final class $anonfun extends AbstractFunction1$mcII$sp with Serializable
[error]                  tpe: examples.MacroMatcherSpec.Test.$anonfun.type
[error]        symbol owners: anonymous class $anonfun -> value <local Test> -> class Test -> method e1 -> class MacroMatcherSpec -> package examples
[error]       context owners: value $outer -> anonymous class $anonfun -> value <local Test> -> class Test -> method e1 -> class MacroMatcherSpec -> package examples
[error]
[error] == Enclosing template or block ==
[error]
[error] DefDef( // val $outer(): Test.this.type
[error]   <method> <synthetic> <stable> <expandedname>
[error]   "examples$MacroMatcherSpec$Test$$anonfun$$$outer"
[error]   []
[error]   List(Nil)
[error]   <tpt> // tree.tpe=Any
[error]   $anonfun.this."$outer" // private[this] val $outer: Test.this.type, tree.tpe=Test.this.type
[error] )


I am really bad at deciphering what this means, but I suspect that it is related to the fact that I can't reference this.otherMethod in the body of the vampire method. Is there a way to do that?

If this works, my next step will be to have this kind of implementation for otherMethod:

def otherMethod(p: Int) = new $className {
override protected val aValue = p
}


#### Install Apache Spark on Windows 8

Can some kind, generous person please post a step-by-step guide to install and run Apache Spark on Windows 8? And be very detailed with each step, please.

I am a business analyst and have no command line experience. But I was able to install and run several Spark jobs on my wife's Mac at the scala> command prompt -- I can easily use SBT, but I could not get everything configured correctly on Windows 8. (Yes I will buy a Mac next time).

Thank you.

p.s. I tried to follow the video guide at: https://spark.apache.org/screencasts/1-first-steps-with-spark.html but that is for Mac.

Step-by-step guide: it may be very helpful for other users as well if somebody can post a step-by-step guide, since instructions like "run blah blah" are hard to follow. I need each exact step, please.

I am not posting the steps I have taken and the errors I get, because what I really want is a step-by-step guide starting from the very beginning. Thank you.

### StackOverflow

#### What's the right ZMQ pattern?

I would like to implement a system where:

• there is one server
• there are many clients
• the clients send requests to the server.

Obviously, the REQ/REP pattern would be the right one to use. But:

• I want the clients to be able to send multiple requests, without waiting for the response.
• I want the server to process multiple requests in parallel.

So as far as I know, the correct pattern for this would be DEALER/ROUTER, is this correct? Or is there a better approach?

The client should be able to send many requests and should receive the corresponding responses asynchronously.

### StackOverflow

#### Using Scalaz stream, how to convert A => Task[B] to Process1[A,B]

I am encoding an HTTP request to a remote server as a function which takes an id and yields a Task[JValue].

I would like to convert that function into a Process1, to simplify my program (by simplify, I mean using Processes as building blocks as much as possible).

I would like to convert the function

    reqDoc(id:A):Task[B]


(where A is the type of the Id, and B is the type of the response) into

    reqDoc:Process1[A,B]


#### What is Hindley-Milner?

I encountered the term Hindley-Milner, and I'm not sure I grasp what it means.

There was no single entry for this term on Wikipedia, which usually offers me a concise explanation.
Note - one has now been added.

What is it?
What languages and tools implement or use it?
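
For a small flavor of what such inference does, here is a hedged Scala illustration (Scala's inference is local and more limited than full Hindley-Milner, which ML-family languages and Haskell implement):

```scala
object InferenceDemo {
  // No type annotations anywhere: the compiler infers List[Int] for xs,
  // Int for x, and List[Int] for doubled.
  val xs = List(1, 2, 3)
  val doubled = xs.map(x => x * 2)
}
```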

#### Ways for heartbeat message

I am trying to set up a heartbeat over a network, i.e. having an actor send a message to the network at a fixed period of time. I would like to know if you have any better solution than the one I used below, as I feel it is pretty ugly, considering the synchronisation constraints.

import akka.actor._
import akka.actor.Actor
import akka.actor.Props
import akka.actor.ScalaActorRef
import akka.pattern.gracefulStop
import akka.util._
import java.util.Calendar
import java.util.concurrent._
import java.text.SimpleDateFormat
import scala.Array._
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

sealed trait Message
case class Information() extends Message // does it really need to be here?
case class StartMessage() extends Message
case class HeartbeatMessage() extends Message
case class StopMessage() extends Message
case class FrequencyChangeMessage(
f: Int
) extends Message

class Gps extends Actor {
override def preStart() {
}
def receive = {
case "beat" =>
//TODO
case _      =>
println("gps: wut?")
}
}

class Cadencer(p3riod: Int) extends Actor {
var period: Int = _
var stop: Boolean = _
override def preStart() {
period = p3riod
stop = false
context.system.scheduler.scheduleOnce(period milliseconds, self, HeartbeatMessage)
}
def receive = {
case StartMessage =>
stop = false
context.system.scheduler.scheduleOnce(period milliseconds, self, HeartbeatMessage)
case HeartbeatMessage =>
if (false == stop) {
context.system.scheduler.scheduleOnce(0 milliseconds, context.parent, "beat")
context.system.scheduler.scheduleOnce(period milliseconds, self, HeartbeatMessage)
}
case StopMessage =>
stop = true
case FrequencyChangeMessage(f) =>
period = f
case _  =>
println("wut?\n")
//throw exception
}
}

object main extends App {
val system = akka.actor.ActorSystem("mySystem")
val gps = system.actorOf(Props[Gps], name = "gps")
}


What I called a cadencer here sends a HeartbeatMessage to a target actor and to itself; to itself in order to trigger sending another one after a given amount of time, and thus the process carries on until a StopMessage (flipping stop to true). Good?

Also, is having a separate actor for this even efficient, rather than embedding it within a larger one?
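
As a point of comparison, the fixed-rate idea can be sketched without Akka using plain Java-platform scheduling (all names below are illustrative; within Akka itself, the repeating variant of the scheduler serves the same purpose and saves the actor from re-arming itself on every beat):

```scala
import java.util.concurrent.{Executors, ScheduledFuture, TimeUnit}

// A minimal sketch of a fixed-rate heartbeat: `beat` fires every
// `periodMillis` milliseconds until stop() is called.
class SimpleHeartbeat(periodMillis: Long)(beat: () => Unit) {
  private val scheduler = Executors.newSingleThreadScheduledExecutor()
  private val handle: ScheduledFuture[_] = scheduler.scheduleAtFixedRate(
    new Runnable { def run(): Unit = beat() },
    periodMillis, periodMillis, TimeUnit.MILLISECONDS)

  def stop(): Unit = {
    handle.cancel(false)
    scheduler.shutdown()
  }
}
```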

#### Restricting a trait to objects?

Is there a way to restrict a trait so that it can only be mixed into objects? E.g.

trait OnlyForObjects {
this: ... =>
}

object Foo extends OnlyForObjects  // --> OK

class Bar extends OnlyForObjects   // --> compile error
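
One approach I believe works is a self-type of Singleton, since only an object's type conforms to it; a hedged sketch:

```scala
trait OnlyForObjects { this: Singleton => }

object Foo extends OnlyForObjects   // compiles: Foo.type is a singleton type

// class Bar extends OnlyForObjects // error: illegal inheritance;
//                                  // self-type Bar does not conform to Singleton
```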


### StackOverflow

#### How to convert a generic HList to a List

I have these:

trait A[T]
class X
class Y

object B {
def method[H <: HList](h: H) = h.toList[A[_]]
}


The parameter h of method will always be an HList of A[T], like new A[X] :: new A[Y] :: HNil.

I would like to convert the HList to a List[A[_]].

How can I do this with generic code, given that the HList trait doesn't have a toList method?

#### How do I submit a form for a model that contains a list of other models with Salat & Play framework?

I have a model. It contains a list of another model:

case class Account(
_id: ObjectId = new ObjectId,
name: String,
campaigns: List[Campaign]
)

case class Campaign(
_id: ObjectId = new ObjectId,
name: String
)


I have a form and action for display and creating new Accounts:

  val accountForm = Form(
mapping(
"id" -> ignored(new ObjectId),
"name" -> nonEmptyText,
"campaigns" -> list(
mapping(
"id" -> ignored(new ObjectId),
"name" -> nonEmptyText
)(Campaign.apply)(Campaign.unapply)
)
)(Account.apply)(Account.unapply)
)

def accounts = Action {
Ok(views.html.accounts(AccountObject.all(), accountForm, CampaignObject.all()))
}

def newAccount = Action {
implicit request =>
accountForm.bindFromRequest.fold(
account => {
AccountObject.create(account)
Redirect(routes.AccountController.accounts)
}
)
}


Finally, here is my view for Accounts:

@(accounts: List[models.mongodb.Account], account_form: Form[models.mongodb.Account], campaign_list: List[models.mongodb.Campaign])

@import helper._
@args(args: (Symbol, Any)*) = @{
args
}
@main("Account List") {
<h1>@accounts.size Account(s)</h1>
<ul>
@accounts.map { account =>
<li>
@account.name
</li>
}
</ul>
@form(routes.AccountController.newAccount()) {
<fieldset>
@inputText(account_form("name"), '_label -> "Account Name")
@select(
account_form("campaigns"),
options(campaign_list.map(x => x.name):List[String]),
args(
'class -> "chosen-select",
'multiple -> "multiple",
'style -> "width:350px;"
): _*
)
<input type="submit" value="Create">
</fieldset>
}
}


The problem is when I submit this form, it submits it with a list of strings for the campaigns field. This gives me a 400 error when I post the form submission.

I would like to either submit the form with a list of campaigns instead of strings, or have the form submit a list of strings and then process the strings into a list of campaigns in my controller. Which way would be better, and how would I do it? Thanks!

#### Joda Time: how to parse time and set default date as today's date

I have to parse dates in formats:

HH:mm
dd MMM
dd MMM yyyy


I've managed to handle the last two of them:

val dateParsers = Array(
DateTimeFormat.forPattern("dd MMM").getParser,
DateTimeFormat.forPattern("dd MMM yyyy").getParser,
ISODateTimeFormat.timeParser().getParser
)

val formatter = new DateTimeFormatterBuilder().append(null, dateParsers).toFormatter.withZoneUTC
DateTime.parse(updatedString, formatter.withDefaultYear(currentYear).withLocale(ruLocale))


Everything is OK with dd MMM and dd MMM yyyy, but when I try to parse a time like 05:40 I get the date 01-01-1970 instead of today's date. What is the simplest way to set today's date as the default date in the parser?
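
One workaround I'd sketch (assuming Joda-Time, which to my knowledge has no default-date hook on the formatter itself, only withDefaultYear): parse a bare time as a LocalTime and attach today's date explicitly. The regex and pattern choice here are illustrative.

```scala
import org.joda.time.{DateTime, LocalDate, LocalTime}
import org.joda.time.format.DateTimeFormat

object TimeParsing {
  // If the input is just HH:mm, combine it with today's date; otherwise fall
  // back to a date formatter.
  def parseWithToday(s: String): DateTime =
    if (s.matches("""\d{1,2}:\d{2}"""))
      LocalDate.now.toDateTime(LocalTime.parse(s, DateTimeFormat.forPattern("HH:mm")))
    else
      DateTime.parse(s, DateTimeFormat.forPattern("dd MMM yyyy"))
}
```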

### TheoryOverflow

#### LALR(1) and SLR(1) tricky question [on hold]

In fact, I ran into this multiple-choice question in a recent exam in a compiler course. Suppose tables T1 and T2 are created with SLR(1) and LALR(1), respectively, for a grammar G. If G is an SLR(1) grammar, which of the following is TRUE?

a) just T1 has meaning for G.

b) T1 and T2 has not any difference.

c) total Number of non-error element in T1 is lower than T2

d) total Number of error element in T1 is lower than T2 (because of lower space in table)

My solution:

If a grammar is SLR(1), it is also LALR(1), and the LALR(1) table is the same size as the SLR(1) table, so (b) is correct.

### UnixOverflow

#### USB audio only outputs white noise

I am running OpenBSD/i386 5.1 on a 5-year-old laptop. The speakers and headphone port work, but the headphone port is a little loose, so I am trying to install an external USB sound card (Fiio E17 USB DAC). There are no problems using it on Windows.

The device is detected and I created a node for it in /dev with sh /dev/MAKEDEV audio1, then linked the rest of the devices to point to the new sound card. So far so good, I am able to run cat /dev/urandom > /dev/audio and I hear white noise. However, I am not able to run any other audio through it.

My tail /var/log/messages after plugging the device in:

Aug 30 10:03:55 s96j /bsd: uhidev0 at uhub1
Aug 30 10:03:55 s96j /bsd:  port 1 configuration 1 interface 0 "FiiO FiiO USB DAC-E17" rev 1.10/0.01 addr 2
Aug 30 10:03:55 s96j /bsd: uhidev0: iclass 3/0
Aug 30 10:03:55 s96j /bsd: uhid0 at uhidev0: input=18, output=27, feature=0
Aug 30 10:03:55 s96j /bsd: uaudio0 at uhub1
Aug 30 10:03:55 s96j /bsd:  port 1 configuration 1 interface 1 "FiiO FiiO USB DAC-E17" rev 1.10/0.01 addr 2
Aug 30 10:03:56 s96j /bsd: uaudio0: ignored setting with type 8193 format
Aug 30 10:03:56 s96j /bsd: uaudio0: audio rev 1.00, 2 mixer controls
Aug 30 10:03:56 s96j /bsd: audio1 at uaudio0


My list of relevant devices from /dev:

lrwxr-xr-x  1 root  wheel         6 Aug 30 09:44 audio -> audio1
crw-rw-rw-  1 root  wheel   42, 128 Aug 30 10:07 audio0
crw-rw-rw-  1 root  wheel   42, 129 Aug 30 10:15 audio1
crw-rw-rw-  1 root  wheel   42, 130 Aug 30 06:40 audio2
lrwxr-xr-x  1 root  wheel         9 Aug 30 09:44 audioctl -> audioctl1
crw-rw-rw-  1 root  wheel   42, 192 Aug 30 06:40 audioctl0
crw-rw-rw-  1 root  wheel   42, 193 Aug 30 09:44 audioctl1
crw-rw-rw-  1 root  wheel   42, 194 Aug 30 06:40 audioctl2
lrwxr-xr-x  1 root  wheel         6 Aug 30 09:45 mixer -> mixer1
crw-rw-rw-  1 root  wheel   42,  16 Aug 30 06:40 mixer0
crw-rw-rw-  1 root  wheel   42,  17 Aug 30 09:44 mixer1
crw-rw-rw-  1 root  wheel   42,  18 Aug 30 06:40 mixer2
lrwxr-xr-x  1 root  wheel         6 Aug 30 09:45 sound -> sound1
crw-rw-rw-  1 root  wheel   42,   0 Aug 30 06:40 sound0
crw-rw-rw-  1 root  wheel   42,   1 Aug 30 09:44 sound1
crw-rw-rw-  1 root  wheel   42,   2 Aug 30 06:40 sound2


A simple test from the FAQ to determine if data is passing over the device:

# cat > /dev/audio < /dev/zero &
[1] 21098
# audioctl play.{seek,samples,errors}
play.seek=61712
play.samples=1146080
play.errors=0
# audioctl play.{seek,samples,errors}
play.seek=52896
play.samples=1542800
play.errors=0
# audioctl play.{seek,samples,errors}
play.seek=61712
play.samples=1957152
play.errors=0


My audioctl -a:

name=USB audio
version=
config=uaudio
encodings=slinear_le:16:2:1,slinear_le:24:3:1
properties=independent
full_duplex=0
fullduplex=0
blocksize=8816
hiwat=7
lowat=1
output_muted=0
monitor_gain=0
mode=
play.rate=44100
play.sample_rate=44100
play.channels=2
play.precision=16
play.bps=2
play.msb=1
play.encoding=slinear_le
play.gain=127
play.balance=32
play.port=0x0
play.avail_ports=0x0
play.seek=8816
play.samples=131988
play.eof=0
play.pause=0
play.error=1
play.waiting=0
play.open=0
play.active=0
play.buffer_size=65536
play.block_size=8816
play.errors=2267
record.rate=44100
record.sample_rate=44100
record.channels=2
record.precision=16
record.bps=2
record.msb=1
record.encoding=slinear_le
record.gain=127
record.balance=32
record.port=0x0
record.avail_ports=0x0
record.seek=0
record.samples=0
record.eof=0
record.pause=0
record.error=0
record.waiting=0
record.open=0
record.active=0
record.buffer_size=65536
record.block_size=8816
record.errors=0


And lastly, my mixerctl -a:

outputs.aux.mute=off
outputs.aux=255,255


Again I am able to cat /dev/urandom > /dev/audio and get white noise, but nothing else I've tried lets me output other sounds or music. I also tried cat sample.au > /dev/audio but that was silent as well.

Any suggestions or help would be greatly appreciated! Worst case, hopefully someone can use the steps I outlined here to troubleshoot their own sound devices.

### StackOverflow

#### Save Gatling results as a JSON or xml file

Recently I have started using Gatling, but to integrate Gatling with Jenkins I need the output in JSON or XML format. How can I achieve this?

#### Scala Dependency injection

I'm not asking for opinions here, but facts.

I'm trying to pick a new DI library. I have had some experience with Guice. Overall I would say that one advantage of it is that when you need to integrate Scala with Java, Guice does the job. So for interoperability it's a clear plus.

If we put aside this interoperability issue, can anyone give me a brief comparison between

I'm still new at understanding scaldi. One thing that I found surprising is the idea of having to pass your injector around through an implicit parameter. I almost never did that in Guice. I either wire up everything in my main, or use assisted injection, hence passing the factories to the classes that need some specific instance.

If someone could further elaborate on that design choice, I would appreciate it.

Many thanks,

-M-

Here is something strange I found with macWire:

trait Interf {
def name:String
}

class InterfImpl(val name:String) extends Interf

trait AModule {

import com.softwaremill.macwire.MacwireMacros._

//lazy val aName: String = "aName"
lazy val theInterf: Interf = wire[InterfImpl]

}

object injector extends AModule

println(injector.theInterf.name)


I get a strange value. I don't know what macWire is doing at that level. I thought it would produce a compile error or something, since I did not give any String value.

#### mongodb database with scala play 2.0 tutorial

Is there a tutorial on how I can use a MongoDB database with Scala Play 2.0?

On the official website (playframework.org) there seems to be only the SQL example.

#### scala REPL - scala is not installed

I have installed sbt as described in the following steps:

2. Unpack the archive to a directory of your choice
3. Add the bin/ directory to the PATH environment variable. Open the file ~/.bashrc in an editor (create it if it doesn't exist) and add the following line:

export PATH=/PATH/TO/YOUR/sbt/bin:$PATH

But when I type scala in the terminal, it says scala is not installed! Though sbt -h works fine. How do I resolve the issue?

#### leiningen - how to add dependencies for local jars?

I want to use leiningen to build and develop my clojure project. Is there a way to modify project.clj to tell it to pick some jars from local directories?

I have some proprietary jars that cannot be uploaded to public repos.

Also, can leiningen be used to maintain a "lib" directory for clojure projects? If a bunch of my clojure projects share the same jars, I don't want to maintain a separate copy for each of them.

Thanks

### Planet Emacsen

#### Irreal: A Followup on Leaving Gmail

In my post about Chen Bin's guide to using Gnus with Gmail, I mentioned that in my own quest to move my email operations to Emacs, I was looking at three packages: mew, mu4e, and gnus. In the comments, I got a couple more recommendations. David recommended Wanderlust as a mature and full-featured solution. Sam recommended that I look at Notmuch. Both are useful additions to my list and I'm glad to have them, even though they complicate my decision making.

Sam also provided a link to a post by the invaluable Christopher Wellons that compares Notmuch and mu4e. Wellons' post is interesting because it's principally about moving off of Gmail and onto his own server, which he would access using an Emacs-based email client. I found this particularly interesting because that's my end goal: no email middlemen that offer the NSA and others easy access to my email.

If you're OK with Gmail but would just like to compose messages in Emacs, Artur Malabarba has got you covered with his gmail-message-mode that lets you hot-key from your browser to Emacs when you want to compose an email.
Malabarba's got it working with Chrome, Firefox, and Conkeror. He uses Markdown to compose messages, but it could probably be patched to use Org-mode fairly easily. In any event, if you're interested in integrating Gmail and Emacs, give Malabarba's post a look.

### StackOverflow

#### What's wrong with my implementation of the memoize function?

Theoretically, memoization applied to a referentially transparent function like Fibonacci should speed things up considerably.

Function.prototype.memoize = function () {
var cache = {},
slice = Array.prototype.slice,
originalFunction = this;
return function () {
var key = slice.call(arguments);
if (key in cache) {
return cache[key];
} else {
return (cache[key] = originalFunction.apply(this, key));
}
};
};

var Y = function (f) {
return function (x) {
return f(Y(f))(x);
};
};

var almostFibonacci = function (f) {
return function (n) {
return n == 0 || n == 1 ? n : f(n - 1) + f(n - 2);
};
};

console.log(Y(almostFibonacci).memoize()(50));


The above implementation does not. Computing the 50th Fibonacci number, which ought to be quite doable with a memoized version, seems impossible in this case. The scratchpad on Firefox keeps dying when I try. I have a feeling that maybe the closure isn't working. Any idea what I'm doing wrong?

http://jsfiddle.net/x18ewhsu/

### /r/systems

#### "Network Stack Specialization for Performance" [PDF, 2014]

### /r/netsec

#### CSRF vulnerabilities in Disqus WordPress Plugin

### StackOverflow

#### Multiplying numbers on horizontal, vertical, and diagonal lines

I'm currently working on a Project Euler problem (www.projecteuler.net) for fun but have hit a stumbling block. One of the problems provides a 20x20 grid of numbers and asks for the greatest product of 4 numbers on a straight line. This line can be either horizontal, vertical, or diagonal.
Using a procedural language I'd have no problem solving this, but part of my motivation for doing these problems in the first place is to gain more experience and learn more Haskell.

As of right now I'm reading in the grid and converting it to a list of lists of ints, e.g. [[Int]]. This makes the horizontal multiplication trivial, and by transposing the grid the vertical also becomes trivial. The diagonal is what is giving me trouble. I've thought of a few ways where I could use explicit array slicing or indexing to get a solution, but it seems overly complicated and hacky. I believe there is probably an elegant, functional solution here, and I'd love to hear what others can come up with.

### QuantOverflow

#### How can theta be so large on this option?

The AAPL Sep 95 put currently has a theta of -0.21. The put midpoint is 0.84. 0.84 / 0.21 = 4 days. However, the put has nearly a month before expiration, at which time it will be zero. Not 4 days from now.

What am I doing wrong or missing in the above calculation?

### StackOverflow

#### Spark throwing Out of Memory error

I have a single test node with 8 GB of RAM on which I am loading barely 10 MB of data (from csv files) into Cassandra (on the same node itself). I'm trying to process this data using Spark (running on the same node).

Please note that for SPARK_MEM I'm allocating 1 GB of RAM, and for SPARK_WORKER_MEMORY I'm allocating the same. The allocation of any extra amount of memory results in Spark throwing a "Check if all workers are registered and have sufficient memory" error, which is more often than not indicative of Spark trying to look for extra memory (as per the SPARK_MEM and SPARK_WORKER_MEMORY properties) and coming up short.

When I try to load and process all data in the Cassandra table using the Spark context object, I get an error during processing. So, I'm trying to use a looping mechanism to read chunks of data at a time from one table, process them, and put them in another table.
My source code has the following structure:

var data = sc.cassandraTable("keyspacename", "tablename").where("value=?", 1)
data.map(x => tranformFunction(x)).saveToCassandra("keyspacename", "tablename")

for (i <- 2 to 50000) {
data = sc.cassandraTable("keyspacename", "tablename").where("value=?", i)
data.map(x => tranformFunction(x)).saveToCassandra("keyspacename", "tablename")
}


Now, this works for a while, for around 200 loops, and then throws an error: java.lang.OutOfMemoryError: unable to create a new native thread.

I've got two questions:

• Is this the right way to deal with data?
• How can processing just 10 MB of data do this to a cluster?

### DataTau

#### Competitive Machine Learning: A How-to

### CompsciOverflow

#### How to simulate a die given a fair coin

Suppose that you're given a fair coin and you would like to simulate the probability distribution of repeatedly flipping a fair (six-sided) die. My initial idea is that we need to choose appropriate integers $k,m$ such that $2^k = 6m$. So after flipping the coin $k$ times, we map the number encoded by the $k$-length bitstring to outputs of the die by dividing the range $[0,2^k-1]$ into 6 intervals, each of length $m$. However, this is not possible, since $2^k$ has two as its only prime factor but the prime factors of $6m$ include three. There should be some other simple way of doing this, right?

### QuantOverflow

#### Stock Price Evolution Equation Clarifications [on hold]

What is the dimension of Δt? Is it seconds [s]?

What are the expected return and the expected volatility? How can I calculate them for any given stock? Should I use the return and volatility values for the moment in which I need to simulate the price?

Is ε a random variable, ε ∈ [0,1), with an expected value of 1/2?

• S(0): The stock price today.
• S(Δt): The stock price at a (small) time into the future.
• Δt: A small increment of time.
• μ: The expected return.
• σ: The expected volatility.
• ε: A (random) number sampled from a standard normal distribution.
### /r/compsci

#### Solving bridge and torch puzzle with dynamic programming

I'm trying to solve a bridge-and-torch-like problem with dynamic programming. More about this problem can be found on Wikipedia (http://en.wikipedia.org/wiki/Bridge_and_torch_problem). The story goes like this:

Four people come to a river in the night. There is a narrow bridge, but it can only hold two people at a time. They have one torch and, because it's night, the torch has to be used when crossing the bridge. Person A can cross the bridge in one minute, B in two minutes, C in five minutes, and D in eight minutes. When two people cross the bridge together, they must move at the slower person's pace. The question is, can they all get across the bridge in 15 minutes or less?

Now I've managed to solve the problem using a graph, but I don't see how I can solve this type of problem using dynamic programming. How do you split up the problem into subproblems? And how do the solutions of the subproblems lead to the optimal solution of the whole problem? What are the stages and states? Does somebody know how to solve this using DP? And maybe tell me how to solve this puzzle with Java?

submitted by Lesso_

### StackOverflow

#### How to write it with for-comprehension instead of nested flatMap calls?

I am trying to translate examples from this article to Scala. So I defined a monadic class Parser with success as the return function.

class Parser[A](val run: String => List[(A, String)]) {
def apply(s: String) = run(s)

def flatMap[B](f: A => Parser[B]): Parser[B] = {
val runB = { s: String => for ((r, rest) <- run(s); rb <- f(r)(rest)) yield rb }
new Parser(runB)
}
}

def success[A](a: A): Parser[A] = {
val run = { s: String => List((a, s)) }
new Parser(run)
}


I also defined a new parser item that returns the 1st character of the input.
def item(): Parser[Char] = {
val run = { s: String => if (s.isEmpty) Nil else List((s.head, s.tail)) }
new Parser(run)
}


Now I am defining a new parser item12: Parser[(Char, Char)] that returns a pair of the 1st and 2nd characters:

def item12(): Parser[(Char, Char)] =
item().flatMap(a => item().flatMap(b => success(a, b)))


I would like to write it with a for-comprehension instead of nested flatMap calls. I understand that I need to define a map method for the Parser. How would you do that?

#### Akka Dead Letters with Ask Pattern

I apologize in advance if this seems at all confusing, as I'm dumping quite a bit here.

Basically, I have a small service grabbing some Json, parsing and extracting it to case class(es), then writing it to a database. This service needs to run on a schedule, which is being handled well by an Akka scheduler. My database doesn't like it when Slick tries to ask for a new AutoInc id at the same time, so I built in an Await.result to block that from happening.

All of this works quite well, but my issue starts here: there are 7 of these services running, so I would like to block each one using a similar Await.result system. Every time I try to send the end time of the request back as a response (at the end of the else block), it gets sent to dead letters instead of to the Distributor. Basically: why does sender ! time go to dead letters and not to the Distributor?

This is a long question for a simple problem, but that's how development goes...
ClickActor.scala

import java.text.SimpleDateFormat
import java.util.Date
import Message._
import akka.actor.{Actor, ActorLogging, Props}
import akka.util.Timeout
import com.typesafe.config.ConfigFactory
import net.liftweb.json._
import spray.client.pipelining._
import spray.http.{BasicHttpCredentials, HttpRequest, HttpResponse, Uri}
import akka.pattern.ask
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

case class ClickData(recipient: String, geolocation: Geolocation, tags: Array[String],
                     url: String, timestamp: Double, campaigns: Array[String],
                     `user-variables`: JObject, ip: String, `client-info`: ClientInfo,
                     message: ClickedMessage, event: String)
case class Geolocation(city: String, region: String, country: String)
case class ClientInfo(`client-name`: String, `client-os`: String, `user-agent`: String,
                      `device-type`: String, `client-type`: String)
case class ClickedMessage(headers: ClickHeaders)
case class ClickHeaders(`message-id`: String)

class ClickActor extends Actor with ActorLogging {

  implicit val formats = DefaultFormats
  implicit val timeout = new Timeout(3 minutes)
  import context.dispatcher

  val con = ConfigFactory.load("connection.conf")
  val countries = ConfigFactory.load("country.conf")
  val regions = ConfigFactory.load("region.conf")

  val df = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss -0000")
  var time = System.currentTimeMillis()
  var begin = new Date(time - (12 hours).toMillis)
  var end = new Date(time)

  val pipeline: HttpRequest => Future[HttpResponse] = (
    addCredentials(BasicHttpCredentials("api", con.getString("mailgun.key")))
      ~> sendReceive
  )

  def get(lastrun: Long): Future[String] = {
    if (lastrun != 0) {
      begin = new Date(lastrun)
      end = new Date(time)
    }
    val uri = Uri(con.getString("mailgun.uri")) withQuery ("begin" -> df.format(begin),
      "end" -> df.format(end), "ascending" -> "yes", "limit" -> "100",
      "pretty" -> "yes", "event" -> "clicked")
    val request = Get(uri)
    val futureResponse = pipeline(request)
    return
futureResponse.map(_.entity.asString)
  }

  def receive = {
    case lastrun: Long => {
      val start = System.currentTimeMillis()
      val responseFuture = get(lastrun)
      responseFuture.onSuccess {
        case payload: String =>
          val json = parse(payload)
          //println(pretty(render(json)))
          val elements = (json \\ "items").children
          if (elements.length == 0) {
            log.info("[ClickActor: " + this.hashCode() + "] did not find new events between " +
              begin.toString + " and " + end.toString)
            sender ! time
            context.stop(self)
          } else {
            for (item <- elements) {
              val data = item.extract[ClickData]
              var tags = ""
              if (data.tags.length != 0) {
                for (tag <- data.tags) tags += (tag + ", ")
              }
              var campaigns = ""
              if (data.campaigns.length != 0) {
                for (campaign <- data.campaigns) campaigns += (campaign + ", ")
              }
              val timestamp = (data.timestamp * 1000).toLong
              val msg = new ClickMessage(
                data.recipient, data.geolocation.city,
                regions.getString(data.geolocation.country + "." + data.geolocation.region),
                countries.getString(data.geolocation.country), tags, data.url, timestamp,
                campaigns, data.ip,
                data.`client-info`.`client-name`, data.`client-info`.`client-os`,
                data.`client-info`.`user-agent`, data.`client-info`.`device-type`,
                data.`client-info`.`client-type`, data.message.headers.`message-id`,
                data.event, compactRender(item))
              val csqla = context.actorOf(Props[ClickSQLActor])
              val future = csqla.ask(msg)
              val result = Await.result(future, timeout.duration).asInstanceOf[Int]
              if (result == 1) {
                log.error("[ClickSQLActor: " + csqla.hashCode() + "] shutting down due to lack of system environment variables")
                context.stop(csqla)
              } else if (result == 0) {
                log.info("[ClickSQLActor: " + csqla.hashCode() + "] successfully wrote to the DB")
              }
            }
            sender !
time
            log.info("[ClickActor: " + this.hashCode() + "] processed |" + elements.length +
              "| new events in " + (System.currentTimeMillis() - start) + " ms")
          }
      }
    }
  }
}

Distributor.scala

import akka.actor.{Props, ActorSystem}
import akka.event.Logging
import akka.util.Timeout
import akka.pattern.ask
import scala.concurrent.duration._
import scala.concurrent.Await

class Distributor {
  implicit val timeout = new Timeout(10 minutes)
  var lastClick: Long = 0

  def distribute(system: ActorSystem) = {
    val log = Logging(system, getClass)
    val clickFuture = (system.actorOf(Props[ClickActor]) ? lastClick)
    lastClick = Await.result(clickFuture, timeout.duration).asInstanceOf[Long]
    log.info(lastClick.toString)
    //repeat process with other events (open, unsub, etc)
  }
}

### Portland Pattern Repository

#### First Wiki (by li587-82.members.linode.com 32 hours ago)

### StackOverflow

#### mavericks intellij scala not configuring properly

I followed this link http://austindw.com/blog/programming/running-intellij-jdk-1-7-scala-2-10-mac-os-x-10-9-mavericks to configure Scala in IntelliJ, but it says docs are not found. Any ideas why?

### /r/scala

#### New progfun (Functional Programming Principles in Scala) sessions announced, starting Sep 15th!

### QuantOverflow

#### Probability of stock closing over a certain price

A stock has a beta of 2.0 and stock-specific daily volatility of 0.02. Suppose that yesterday's closing price was 100 and today the market goes up by 1%. What's the probability of today's closing price being at least 103?

### StackOverflow

#### Playframework unusual CPU load

Recently we started using PlayFramework and are seeing some unusual activity in CPU load.

Machine details and other configurations:

32G Machine
12 Cores
PlayFramework 2.2.0
java -Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:ReservedCodeCacheSize=128m

The Java applications are running within a Docker container (Docker version 0.8.0).
There are 6 Play servers running behind nginx.

PID   USER PR NI VIRT  RES  SHR S %CPU %MEM TIME+     COMMAND
31752 root 20 0  7876m 1.2g 14m S 716  3.8  150:55.28 java
26282 root 20 0  7862m 1.2g 14m S 48   3.8  310:51.65 java
56449 root 20 0  7789m 389m 13m S 2    1.2  0:33.10   java
40006 root 20 0  7863m 1.2g 14m S 2    3.8  17:56.41  java
42896 root 20 0  7830m 1.2g 14m S 1    3.8  15:10.30  java
52119 root 20 0  7792m 1.2g 14m S 1    3.7  8:48.38   java

The request rate is at max 100 req/s. Has anyone faced similar issues before? Please let me know.

#### Detecting the index in a string that is not printable character with Scala

I have a method that detects the index in a string that is not printable, as follows.

def isPrintable(v: Char) = v >= 0x20 && v <= 0x7E

val ba = List[Byte](33, 33, 0, 0, 0)
ba.zipWithIndex.filter { v => !isPrintable(v._1.toChar) } map { v => v._2 }
> res115: List[Int] = List(2, 3, 4)

The first element of the result list is the index, but I wonder if there is a simpler way to do this.

### Fred Wilson

#### Fifty Three

Another year, another birthday. For the past fifteen years, I’ve been spending my birthday on the beach with my family. That seems like the ideal way to do it. I hope that tradition lasts as long as I do.

The weather has been spectacular on the east end of Long Island this week and we spent most of yesterday afternoon on a boat in Sag Harbor. Today, I plan to do some yoga, play some golf with my son, and have a family dinner tonight.

I don’t really enjoy receiving presents. The best present for me is to be somewhere awesome surrounded by my family. I’ve already received that present. But if you feel that you must send me something, please make a small donation to CSNYC here. I would appreciate that very much.

### Undeadly

#### Google offers 5 EuroBSDCon 2014 travel grants for female computer scientists

Via the EuroBSDCon 2014 organizers comes the news that Google will be sponsoring 5 female computer scientists to attend the EuroBSDCon 2014 conference.
The announcement follows:

Google EMEA Women in Tech Conference and Travel grants for female computer scientists

As part of Google's ongoing commitment to encourage women to excel in computing and technology, Google is pleased to offer Women in Tech Travel and Conference Grants to attend the EuroBSDcon 2014 conference. 5 grants are offered, which include:

• Free registration for the conference
• Up to 1000 EUR towards travel costs (to be paid after the conference)

Read more...

### /r/dependent_types

#### Type-Directed Elaboration of Quasiquotations: A High-Level Syntax for Low-Level Reflection (pdf)

### StackOverflow

#### Play framework- design suggestion for validation

I need to validate if a certain newly added entity has already been added. I believe the standard way to do it is to add constraints. What I'm looking for, though, is to also tell the user what possible matches exist. Usually this would be a globalError, but given that globalError is the first argument of verifying, which is usually constant text, it wouldn't serve my purpose because I would like it to be a custom value computed from the return value of the validator.

Should I do what I want by not adding the validator, allowing the binding to succeed, and then, in the Right part of the returned Either, first create the new element and, upon finding that potential similarities exist, redirect the user? I feel like this shouldn't be the controller's job; this is too much design logic inside the controller. This should technically all lie inside the model itself, and the model should only tell the controller that something is wrong and the 'type of wrongness', so the controller can take appropriate action. It shouldn't have to contain code to determine if there is anything wrong.

Can someone please suggest what should be a good way to do it?
### QuantOverflow

#### ADR vs Foreign Stock Price Arbitrageurs

So I am sure you all know about the whole Argentina default that has been in the papers lately, no need to delve into it. This so-called "technical" default has led to some interesting investment opportunities (Soros Doubles YPF Stake) and a conundrum that I cannot answer.

The telecommunication company Telecom Argentina has ADRs listed on the NYSE. They are also locally traded on the Buenos Aires Stock Exchange. The ADR price, denominated in USD, is currently ~19 USD. The locally traded stock was worth ~45 ARS (~5 USD) last I checked on Bloomberg at work today, creating a 14 USD discrepancy.

There might be some voting right differences, but the shares are pretty much at parity and certainly not enough to explain a 14 USD difference. Many of you might argue that the spread is due to currency, but that is not the case; the ARS is pegged to the USD at ~8.20 ARS for each USD. Country risk premium? Maybe so, but wouldn't that be priced mostly into the currency/bonds, which would then be reflected in the stock? To give you an idea, the stock trades at 12x earnings on the B.A.S. Exchange but at 8x here on the NYSE; something is not adding up.

Aren't there quant shops that do cross-country arbitrage between ADRs and their respective domiciled stocks? And why aren't they arbitraging this spread? There is clearly something wrong.
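The parity check itself is just arithmetic, so it can be written down directly. A hedged sketch (not market data: the 1:1 share ratio below is a placeholder, and ADRs frequently represent several local shares, so the true ratio would have to be confirmed before calling this a mispricing):

```scala
// Implied ARS/USD rate from ADR parity: how many pesos of local stock
// trade against one dollar of ADR. Price inputs are the question's rough quotes.
def impliedFxRate(localPriceArs: Double, adrPriceUsd: Double, sharesPerAdr: Double): Double =
  localPriceArs * sharesPerAdr / adrPriceUsd

val official = 8.20                            // ARS per USD, per the question
val implied1 = impliedFxRate(45.0, 19.0, 1.0)  // ~2.4 at an assumed 1:1 ratio
val implied5 = impliedFxRate(45.0, 19.0, 5.0)  // ~11.8 if one ADR bundled 5 shares
```

If, hypothetically, one ADR represented several local shares, the implied rate lands above the official peg rather than below it, and an implied rate richer than the official one is the classic signature of capital controls (locals paying a premium for ADRs as a channel into USD) rather than a free-money arbitrage.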

### StackOverflow

#### Put Clojure Ring middlewares in correct order

I'm having trouble with my Clojure server middlewares. My app has the following requirements:

• Some routes should be accessible with no problems. Others require basic authentication, so I'd like to have an authentication function that sits in front of all the handler functions and makes sure the request is verified. I've been using the ring-basic-authentication handler for this, especially the instructions on how to separate your public and private routes.

• However, I'd also like the params sent in the Authorization: header to be available in the route controller. For this I've been using Compojure's site function in compojure.handler, which puts variables in the :params dictionary of the request (see for example Missing form parameters in Compojure POST request)

However I can't seem to get both 401 authorization and the parameters to work at the same time. If I try this:

; this is a stripped down sample case:

(defn authenticated?
"authenticate the request"
[service-name token]
(:valid (model/valid-service-and-token service-name token)))

(defroutes token-routes

(defroutes public-routes
controller/routes
; match anything in the static dir at resources/public
(route/resources "/"))

(defroutes authviasms-handler
public-routes
(auth/wrap-basic-authentication
controller/token-routes authenticated?))

;handler is compojure.handler
(def application (handler/site authviasms-handler))

(defn start [port]
(ring/run-jetty (var application) {:port (or port 8000) :join? false}))


the authorization variables are accessible in the authenticated? function, but not in the routes.

Obviously, this isn't a very general example, but I feel like I'm really spinning my wheels and just making changes at random to the middleware order and hoping things work. I'd appreciate some help both for my specific example, and learning more about how to wrap middlewares to make things execute correctly.

Thanks, Kevin

#### Installation of cider-nrepl

I've installed CIDER 0.7.0 and now when I start it inside of Emacs (via M-x cider-jack-in RET), I get the following warning:

WARNING: CIDER's version (0.7.0) does not match cider-nrepl's version (not installed)

I've downloaded cider-nrepl and found out that it consists of Clojure code, not Emacs Lisp code. Since I started exploring the Clojure world just today, and there are no installation instructions on the project page, could you tell me how I can install cider-nrepl?

### CompsciOverflow

#### What is regular about regular languages? [duplicate]

I am new to automata theory. I am well aware of the definition of a regular language in automata theory, that is, "a language is called a regular language if some finite automaton recognizes/accepts it" [MS]. However, I'm confused about why such a language (a set of strings) is called regular.

1. What is regular about a regular language?

2. Is there any relation between regular sets in automata theory and regular sets in mathematics?

### TheoryOverflow

#### exact cover set problem

I am searching for a heuristic algorithm for a weighted exact cover problem, shown here: http://en.wikipedia.org/wiki/Exact_cover

In my research I only found algorithms which calculate all solutions without any cost function, like http://en.wikipedia.org/wiki/Knuth%27s_Algorithm_X

Do you know some algorithm for that?

My problem has a universe of size 200 and about 500,000 subsets, so it is not possible to calculate all solutions.

--------------EDIT--------------

Example:

Universe = { 1, 2 ,3 ,4 ,5 ,6 ,7}

Sets=[

{1,2,3,4}, Cost 10

{5,6,7}, Cost 20

{5}, Cost 5

{6,7}, Cost 10 ]

In this example I have two possible solutions: {1,2,3,4} and {5,6,7} with cost 30, and the second solution is {1,2,3,4}, {5}, and {6,7} with cost 25.
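Since enumerating all covers is infeasible at 500,000 subsets, one standard compromise (a sketch, not from the question) is depth-first branch and bound: branch on the most constrained uncovered element, only pick sets disjoint from what is already covered, and prune any partial solution whose cost already matches the best complete cover found so far.

```scala
// Branch-and-bound for minimum-cost exact cover. Exhaustive in the worst case,
// but the cost bound and most-constrained-element branching prune heavily;
// cutting the search off early turns it into an anytime heuristic.
case class WSet(elems: Set[Int], cost: Int)

def minExactCover(universe: Set[Int], sets: List[WSet]): Option[(List[WSet], Int)] = {
  // inverted index: element -> candidate subsets containing it
  val byElem: Map[Int, List[WSet]] =
    universe.iterator.map(e => e -> sets.filter(_.elems.contains(e))).toMap

  var best: Option[(List[WSet], Int)] = None

  def search(uncovered: Set[Int], chosen: List[WSet], cost: Int): Unit = {
    if (best.exists(_._2 <= cost)) return              // bound: cannot improve
    if (uncovered.isEmpty) { best = Some((chosen, cost)); return }
    val e = uncovered.minBy(x => byElem(x).size)       // most constrained element
    for (s <- byElem(e) if s.elems.subsetOf(uncovered)) // keep the cover exact
      search(uncovered -- s.elems, s :: chosen, cost + s.cost)
  }

  search(universe, Nil, 0)
  best
}

val sets = List(WSet(Set(1, 2, 3, 4), 10), WSet(Set(5, 6, 7), 20),
                WSet(Set(5), 5), WSet(Set(6, 7), 10))
val best = minExactCover((1 to 7).toSet, sets)  // finds the cost-25 cover above
```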

### StackOverflow

#### Creating custom ScalaFX controls

What exactly is the right way to create a custom ScalaFX control? I'm coming from Swing and Scala Swing, where custom components are simply created by extending Component or Panel. But when I try to extend ScalaFX's Control, I can't extend it without a JavaFX Control delegate. Should I just create custom ScalaFX components by extending the base JavaFX classes instead of the ScalaFX classes?

#### Return second string if first is empty?

Here is an idiom I find myself writing.

def chooseName(nameFinder: NameFinder) = {
  if (nameFinder.getReliableName.isEmpty) nameFinder.secondBestChoice
  else nameFinder.getReliableName
}


In order to avoid calling getReliableName() twice on nameFinder, I add code that makes my method look less elegant.

def chooseName(nameFinder: NameFinder) = {
  val reliableName = nameFinder.getReliableName()
  val secondBestChoice = nameFinder.getSecondBestChoice()
  if (reliableName.isEmpty) secondBestChoice
  else reliableName
}


This feels dirty because I am creating an unnecessary amount of state using the vals for no reason other than to prevent a duplicate method call. Scala has taught me that whenever I feel dirty there is almost always a better way.

Is there a more elegant way to write this?

Here's two Strings, return whichever isn't empty while favoring the first
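One way the favor-the-first idiom is often written (a sketch; the by-name parameters are an assumption so the second getter is only evaluated on fallback):

```scala
// Returns the first string unless it is empty, in which case the second;
// each argument is evaluated at most once.
def chooseName(first: => String, second: => String): String =
  Some(first).filter(_.nonEmpty).getOrElse(second)

chooseName("reliable", "backup")  // "reliable"
chooseName("", "backup")          // "backup"
```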


#### Compile error when adding List as parameter

This code :

package neuralnetwork

object hopfield {
println("Welcome to the Scala worksheet")

object Neuron {
def apply() = new Neuron(0, 0, false, Nil, "")
def apply(l : List[Neuron]) = new Neuron(0, 0, false, l, "")
}

case class Neuron(w: Double, tH: Double, var fired: Boolean, in: List[Neuron], id: String)

val n2 = Neuron
val n3 = Neuron
val n4 = Neuron
val l = List(n2,n3,n4)
val n1 = Neuron(l)

}


causes compile error :

type mismatch; found : List[neuralnetwork.hopfield.Neuron.type] required: List[neuralnetwork.hopfield.Neuron]


at line : val n1 = Neuron(l)

Why does this occur? What is incorrect about the implementation that is preventing the List from being added?
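A likely explanation (an inference from the error message, not stated in the question): `val n2 = Neuron` binds the companion object itself, whose type is `Neuron.type`, so `l` becomes a `List[Neuron.type]`. Calling `apply()` produces actual instances:

```scala
// Hypothetical fix: invoke the companion's apply() so each val is a
// Neuron instance instead of a reference to the companion object.
object Neuron {
  def apply(): Neuron = new Neuron(0, 0, false, Nil, "")
  def apply(l: List[Neuron]): Neuron = new Neuron(0, 0, false, l, "")
}
case class Neuron(w: Double, tH: Double, var fired: Boolean, in: List[Neuron], id: String)

val n2 = Neuron()                   // a Neuron, not Neuron.type
val n3 = Neuron()
val n4 = Neuron()
val n1 = Neuron(List(n2, n3, n4))   // List[Neuron] now matches the signature
```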

### /r/compsci

#### Linux Virus

With the increasing influence of Linux as one of the most widely used operating systems supporting infrastructure and the economy, how is it that Linux developers have solved the problem of malware and virus attacks so elegantly?

submitted by DKBOS

### StackOverflow

#### Is PartialFunction orElse looser on its type bounds than it should be?

Let's define a PartialFunction[String, String] and a PartialFunction[Any, String]

Now, given the definition of orElse

def orElse[A1 <: A, B1 >: B](that: PartialFunction[A1, B1]): PartialFunction[A1, B1]


I would expect not to be able to compose the two, since

• A is String
• A1 is Any

and therefore the bound A1 <: A (i.e. Any <: String) doesn't hold.

Unexpectedly, I can compose them and obtain a PartialFunction[String, String] defined on the whole String domain. Here's an example:

val a: PartialFunction[String, String] = { case "someString" => "some other string" }
// a: PartialFunction[String,String] = <function1>

val b: PartialFunction[Any, String] = { case _ => "default" }
// b: PartialFunction[Any,String] = <function1>

val c = a orElse b
// c: PartialFunction[String,String] = <function1>

c("someString")
// res4: String = some other string

c("foo")
// res5: String = default

c(42)
// error: type mismatch;
//   found   : Int(42)
//   required: String


Moreover, if I explicitly provide the orElse type parameters

a orElse[Any, String] b
// error: type arguments [Any,String] do not conform to method orElse's type parameter bounds [A1 <: String,B1 >: String]


the compiler finally shows some sense.

Is there any type system sorcery I'm missing that causes b to be a valid argument for orElse? In other words, how come that A1 is inferred as String?

If the compiler infers A1 from b then it must be Any, so where else does the inference chain that leads to String start?

# Update

After playing with the REPL I noticed that orElse returns an intersection type A with A1 when the types don't match. Example:

val a: PartialFunction[String, String] = { case "someString" => "some other string" }
// a: PartialFunction[String,String] = <function1>

val b: PartialFunction[Int, Int] = { case 42 => 32 }
// b: PartialFunction[Int,Int] = <function1>

a orElse b
// res0: PartialFunction[String with Int, Any] = <function1>


Since (String with Int) <:< String this works, even though the resulting function is practically unusable. I also suspect that String with Any is unified into String, given that

import reflect.runtime.universe._
// import reflect.runtime.universe._

typeOf[String] <:< typeOf[String with Any]
// res1: Boolean = true

typeOf[String with Any] <:< typeOf[String]
// res2: Boolean = true


So that's why mixing String and Any results in String.

That being said, what is going on under the hood? Under which logic are the mismatching types unified?

# Update 2

I've reduced the issue to a more general form:

class Foo[-A] {
def foo[B <: A](f: Foo[B]): Foo[B] = f
}

val a = new Foo[Any]
val b = new Foo[String]

a.foo(b) // Foo[String] Ok, String <:< Any
b.foo(a) // Foo[String] Shouldn't compile! Any <:!< String
b.foo[Any](a) // error: type arguments [Any] do not conform to method foo's type parameter bounds [A <: String]


### StackOverflow

#### How resolve sbt dependencies in background in Intellij Idea when open new project?

When I open an sbt project with IntelliJ IDEA, it starts a long-running dependency resolution process that blocks the UI.

Is there a way to just open a project and "put" the dependency resolution process into the background (like when I open a Maven project)?

#### "Flattening" a List in Scala & Haskell

Given a List[Option[Int]]:

scala> list
res8: List[Option[Int]] = List(Some(1), Some(2), None)


I can get List(1,2), i.e. extract the list via flatMap and flatten:

scala> list.flatten
res9: List[Int] = List(1, 2)

scala> list.flatMap(x => x)
res10: List[Int] = List(1, 2)


Given the following [Maybe Int] in Haskell, how can I perform the above operation?

I tried the following unsuccessfully:

import Control.Monad

maybeToList :: Maybe a -> [b]
maybeToList Just x  = [x]
maybeToList Nothing = []

flatten' :: [Maybe a] -> [a]
flatten' xs = xs >>= (\y -> y >>= maybeToList)


#### Getting Started with Playframework 2.0 and Selenium

I am using Play framework 2.0. I would like to write some browser-based acceptance tests using Selenium, but I have never used Selenium before, much less integrated it with Play or Scala.

What is a basic setup that I can copy and work from?

### TheoryOverflow

#### Highway dimension

I'm interested in understanding some recent theoretical results on pathfinding. Specifically this paper:

http://research.microsoft.com/apps/pubs/default.aspx?id=201061

I understand from the paper that in certain types of graphs (road networks for example) a small set of nodes are sufficient to cover a large number of shortest paths. I also understand that the size of the hitting set that forms such a cover can differ from node to node but is upper bounded by some integer h which is called the highway dimension of the graph.

Two questions:

i) The concept of highway dimension is defined in this paper (Def. 3.4, page 5) using a distance parameter r and a hitting set H which exists for every vertex v in the graph. Specifically, the authors say (slightly paraphrasing) "for all r > 0 and all v there exists an H whose size is bounded by h (the highway dimension of the graph)"

I don't know how to interpret this: is the highway-dimension h constant for all tuples (r, v) or does the value depend on the choice of r? I tend toward the former interpretation but the paper seems ambiguous on this point.

ii) The definition also makes reference to paths P of length > r which can be reached from the vertex v with distance no more than 2r. I don't understand the significance of the (r, 2r) construction. Would it not be simpler to construct a hitting set that covers all paths which begin at v and have length > r? What is gained by this more complicated definition?

#### Is it decidable to check whether two PDAs accept the same language or not? [on hold]

Is the problem of deciding whether two PDAs accept the same language decidable or undecidable?

### StackOverflow

#### set parameter in play framework action within template

I have a single input form in a play scala template:

@helper.form(action=routes.Application.searchResult()) {
<input type="text" name="userQuery" value="@userQuery">
}


and would like to pass an extra parameter, '@channel' to the searchResult action, which it takes as an optional argument.

@channel is passed as an argument to the current template. What's the simplest way to do this?

I tried replacing

routes.Application.searchResult()


with

routes.Application.searchResult(channel=channel)


with no success
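A sketch of what usually has to line up (the route entry below is an assumption, not taken from the question): the reverse route can only accept the extra argument if it appears in the action's signature in conf/routes, and Play's routes syntax allows a default value so existing callers keep working:

```
# conf/routes — hypothetical entry
POST  /search   controllers.Application.searchResult(channel: Option[String] ?= None)
```

With the parameter declared there, calling the reverse route positionally from the template, @helper.form(action = routes.Application.searchResult(channel)) { ... }, should compile; if the routes signature lacks the parameter, the reverse router has nothing to bind it to, which matches the failure described.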

### TheoryOverflow

#### If a problem is NP-complete, then what are its subproblems, P or NP? [on hold]

Let A be an NP-complete problem, and suppose A is reduced to B. What kind of problem is B? Is it in NP, in P, or both?

### StackOverflow

#### Scala: how to merge a collection of Maps

I have a List of Map[String, Double], and I'd like to merge their contents into a single Map[String, Double]. How should I do this in an idiomatic way? I imagine that I should be able to do this with a fold. Something like:

val newMap = Map[String, Double]() /: listOfMaps { (accumulator, m) => ... }


Furthermore, I'd like to handle key collisions in a generic way. That is, if I add a key to the map that already exists, I should be able to specify a function that returns a Double (in this case) and takes the existing value for that key, plus the value I'm trying to add. If the key does not yet exist in the map, then just add it and its value unaltered.

In my specific case I'd like to build a single Map[String, Double] such that if the map already contains a key, then the Double will be added to the existing map value.

I'm working with mutable maps in my specific code, but I'm interested in more generic solutions, if possible.
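A sketch of the fold approach described above (the collision-handling function is a parameter; addition, per the question, is supplied at the call site):

```scala
// Fold a list of immutable maps into one, resolving key collisions with
// a caller-supplied combine function.
def mergeMaps[K, V](maps: List[Map[K, V]])(combine: (V, V) => V): Map[K, V] =
  maps.foldLeft(Map.empty[K, V]) { (acc, m) =>
    m.foldLeft(acc) { case (a, (k, v)) =>
      a.updated(k, a.get(k).map(combine(_, v)).getOrElse(v))
    }
  }

val merged = mergeMaps(List(Map("a" -> 1.0, "b" -> 2.0), Map("a" -> 3.0)))(_ + _)
// Map(a -> 4.0, b -> 2.0)
```

The same shape works for mutable maps by replacing updated with an in-place update, but the immutable fold matches the `/:` sketch in the question.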

### TheoryOverflow

#### Invert a number modulo a composite number

Suppose $M$ is a composite number and $a$ is an integer such that $a^{-1} \bmod M$ exists. Can we compute $a^{-1} \bmod M$ in $O(\log^{b}(M))$ arithmetic computations, where $b>0$ is some fixed number? Let $a$ and $M$ be of similar sizes.
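For the standard upper bound (well-known background, not from the question): the extended Euclidean algorithm computes $a^{-1} \bmod M$ in $O(\log M)$ arithmetic operations whenever $\gcd(a, M) = 1$, with no factoring of the composite $M$ required. A sketch:

```scala
// Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
def egcd(a: BigInt, b: BigInt): (BigInt, BigInt, BigInt) =
  if (b == 0) (a, BigInt(1), BigInt(0))
  else {
    val (g, x, y) = egcd(b, a % b)
    (g, y, x - (a / b) * y)
  }

// Inverse of a modulo m, defined only when gcd(a, m) == 1.
def modInv(a: BigInt, m: BigInt): Option[BigInt] = {
  val (g, x, _) = egcd(a.mod(m), m)
  if (g == 1) Some(x.mod(m)) else None
}

modInv(3, 10)  // Some(7): 3 * 7 = 21 ≡ 1 (mod 10), even though 10 is composite
```

(Scala's BigInt also exposes this operation directly as a.modInverse(m), backed by java.math.BigInteger.)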

### StackOverflow

#### ReactiveMongo & Play: How to compare two DateTime instances

I use Play-ReactiveMongo to interact with MongoDB... and I'm wondering how to compare two dates considering that I don't use BSON in my application. Let me provide you with an example:

def isTokenExpired(tokenId: String): Future[Boolean] = {

var query = collection.genericQueryBuilder.query(
Json.obj(
"_id" -> Json.obj("$oid" -> tokenId), "expirationTime" -> Json.obj("$lte" -> DateTime.now(DateTimeZone.UTC))
)
).options(QueryOpts(skipN = 0))

query.cursor[JsValue].collect[Vector](1).map {
case Some(_) => true
case _ => false
}
}


isTokenExpired does not work as expected since expirationTime is considered a String – I have an implicit Writes that serializes a DateTime as "yyyy-MM-ddTHH:mm:ss.SSSZ"... and this is correct since I want human-readable JSON.

That said, how do I get a document from a collection that has a DateTime less than another DateTime? The following doesn't seem to work:

Json.obj(
"_id" -> Json.obj("$oid" -> tokenId),
"expirationTime" -> Json.obj("$lte" -> Json.obj("$date" -> DateTime.now(DateTimeZone.UTC).getMillis))
)

Thanks.

#### Unexpected Difficulties with "Hello, World!"

I would like to learn Clojure and I've downloaded and set up the following gizmos:

• Clojure 1.6.0 from the official site;
• Leiningen 2.4.3;
• Cider 0.6.0 from GitHub.

I've got it working. Now I'm trying to print the message "Hello, World!" while running Cider from within Emacs:

; CIDER 0.6.0 (Java 1.7.0_65, Clojure 1.6.0, nREPL 0.2.0-beta5)
user> (println "Hello World!")
Hello World!NoSuchMethodError clojure.tools.nrepl.StdOutBuffer.length()I clojure.tools.nrepl.middleware.session/session-out/fn--7630 (session.clj:43)NoSuchMethodError clojure.tools.nrepl.StdOutBuffer.length()I clojure.tools.nrepl.middleware.session/session-out/fn--7630 (session.clj:43)
user>

What is this noise all about? When I just run:

$ clojure
;Clojure 1.6.0
user=> (println "Hello, World!")
Hello, World!
nil


everything is fine. When I do it with Leiningen:

$ lein repl
; lotsa stuff here...
user=> (println "Hello, World!")

After entering this command I relish the following poetry:

CompilerException java.lang.RuntimeException: Unable to resolve symbol: rprintln in this context, compiling:(NO_SOURCE_PATH:1:1)
NoSuchMethodError clojure.tools.nrepl.StdOutBuffer.length()I clojure.tools.nrepl.middleware.session/session-out/fn--7630 (session.clj:43)
Exception in thread "nREPL-worker-3" java.lang.NoSuchMethodError: clojure.tools.nrepl.StdOutBuffer.length()I
at clojure.tools.nrepl.middleware.session$session_out$fn__7630.doInvoke(session.clj:43)
at clojure.lang.RestFn.invoke(RestFn.java:460)
at clojure.tools.nrepl.middleware.session.proxy$java.io.Writer$ff19274a.write(Unknown Source)
at java.io.PrintWriter.write(PrintWriter.java:456)
at java.io.PrintWriter.write(PrintWriter.java:473)
at clojure.core$fn__5471.invoke(core_print.clj:191)
at clojure.lang.MultiFn.invoke(MultiFn.java:231)
at clojure.core$pr_on.invoke(core.clj:3392)
at clojure.core$pr.invoke(core.clj:3404)
at clojure.lang.AFn.applyToHelper(AFn.java:154)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.core$apply.invoke(core.clj:624)
at clojure.core$prn.doInvoke(core.clj:3437)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:624)
at clojure.core$println.doInvoke(core.clj:3457)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.main$repl_caught.invoke(main.clj:158)
at clojure.tools.nrepl.middleware.interruptible_eval$evaluate$fn__7569$fn__7582.invoke(interruptible_eval.clj:76)
at clojure.main$repl$fn__6634.invoke(main.clj:259)
at clojure.main$repl.doInvoke(main.clj:257)
at clojure.lang.RestFn.invoke(RestFn.java:1096)
at clojure.tools.nrepl.middleware.interruptible_eval$evaluate$fn__7569.invoke(interruptible_eval.clj:56)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invoke(core.clj:624)
at clojure.core$with_bindings_STAR_.doInvoke(core.clj:1862)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at clojure.tools.nrepl.middleware.interruptible_eval$evaluate.invoke(interruptible_eval.clj:41)
at clojure.tools.nrepl.middleware.interruptible_eval$interruptible_eval$fn__7610$fn__7613.invoke(interruptible_eval.clj:171)
at clojure.core$comp$fn__4192.invoke(core.clj:2402)
at clojure.tools.nrepl.middleware.interruptible_eval$run_next$fn__7603.invoke(interruptible_eval.clj:138)
at clojure.lang.AFn.run(AFn.java:22)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)


Oh, Nooo. What a pain! Stop it, stop it!

I'm so perplexed. What is it and how do I fix this? Has anyone had a similar experience? How do I print "Hello, World!" in Clojure?

#### Changing sbt project's directory layout

According to the sbt tutorial on changing paths, I'm trying to change the "target" output directory to "someother":

override def outputDirectoryName = "someother"


Everything goes fine except one thing: sbt automatically creates the target directory with a ".history" file inside. Why does sbt do this when it is supposed to create only the "someother" dir? I tried to override all the methods that are inherited from BasicProjectPaths (I use sbt.DefaultProject as the superclass of my project descriptor):

override def mainCompilePath = ...
override def testCompilePath = ...
...


But sbt creates the "target" folder in spite of the path overrides.

### QuantOverflow

#### Valuation of barrier options in Jump diffusion model

I am trying to evaluate the value of a barrier option using the Monte Carlo method. The stock follows a jump diffusion model. I am using the method described in Metwally and Atiya. The authors describe the steps, so writing the algorithm in Matlab, say, should be easy. I have implemented the first algorithm described in this paper in Matlab, but my results are not the same as those of the authors. For example, my code below gives 5.1, but according to the authors' results it should be 9.013.

The other problem I have is that the probability $P_i$ is sometimes negative or more than 1 during simulation. Could the formula in the paper be wrong? How can it be coded to avoid this? I have used it as it is in the paper.

clc
clear all
t = cputime;

%%%%%%%%%%%%%%%%%%% Parameters %%%%%%%%%%%%%%%%
S0 = 100.0;
X = 110.0;
H = 85.0;
R = 1.0;
r = 0.05;
sigma = 0.25;
T = 1.0;

%%%%%%%%%%%%%%%% Jump Parameters %%%%%%%%%%%%%%
lam = 2.0;
muA = 0.0;
sigmaA = 0.1;

%%%%%%%%%%%%%%% calculated parameters %%%%%%%%%%
k = exp(muA+0.5*sigmaA*sigmaA)-1;
c = r-0.5*sigma^2-lam*k;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
N = 1e5; % Monte Carlo runs
DP = zeros(N,1);
for n = 1:N
    I = 1;
    jumpTimes = 0:exprnd(lam):T; % interjump times Exp(lam)
    K = size(jumpTimes,2)-1;
    jumpTimes(end+1) = T;
    x = log(S0);
    for i = 1:K+1
        tau = jumpTimes(i+1)-jumpTimes(i);
        xbefore = x + c*tau + sigma*sqrt(tau)*randn();

        p = 1.0-exp(-2.0*(log(H)-x)*(log(H)-xbefore)/(tau*sigma^2));
        p = p*(xbefore > log(H));
        b = (jumpTimes(i+1)-jumpTimes(i))/(1.0-p);
        s = jumpTimes(i)+b*rand();

        if s <= jumpTimes(i+1) && s >= jumpTimes(i)
            gamma = exp(-(x-xbefore+c*tau)^2/(2*sigma^2*tau))/(sigma*sqrt(2*pi*tau));
            g = (x-log(H))/(2*gamma*pi*sigma^2)*(s-jumpTimes(i))^(-1.5)*(jumpTimes(i+1)-s)^(-0.5)*...
                exp(-((xbefore-log(H)-c*(jumpTimes(i+1)-s))^2/(2*(jumpTimes(i+1)-s)*sigma^2)+...
                (x-log(H)+c*(s-jumpTimes(i)))^2/(2*(s-jumpTimes(i))*sigma^2)));
            DP(n) = R*b*g*exp(-r*s);
            I = 0;
            break
        end
        A = muA + sigmaA*randn();
        xafter = xbefore + A;
        if xafter <= log(H)
            DP(n) = R*exp(-r*jumpTimes(i+1));
            I = 0;
            break
        end
        x = xafter;
    end
    if I==1 % no crossing happened
        xT = log(S0)+(c+muA*lam)*T+sqrt((sigma^2+(muA^2+sigmaA^2)*lam)*T)*randn();
        DP(n) = exp(-r*T)*max(exp(xT) - X, 0.0);
    end
end

DOC = mean(DP)
e = (cputime-t)/60;


### StackOverflow

#### trying to understand zeromq high water mark behaviour

I have been playing around with pyzmq and simple load balancing using HWM, and I don't quite understand the behaviour I am seeing.

I have set up a simple multithreading test, with a DEALER client connected to two workers via a ROUTER-to-DEALER pattern. HWM is set to 1. One of the workers is very fast and the other is very slow, and all the client does is spam 100 messages to the server. This generally seems to work, and the faster worker processes many more messages than the slow worker.

However, even if I set the slow worker to be so slow that the fast worker should be able to process 99 messages before the slow worker finishes even one, the slow worker still seems to receive at least 2 or 3 messages.

Is high water mark behaviour inexact, or am I missing something?

The server code is as follows:

import re, sys, time, string, zmq, threading, signal

def worker_routine(worker_url, worker_id, context=None):
    # socket to talk to dispatcher
    context = context or zmq.Context.instance()
    socket = context.socket(zmq.REP)
    socket.set_hwm(1)
    socket.connect(worker_url)

    print "worker ", worker_id, " ready ..."
    while True:
        x = socket.recv()

        if worker_id == 1:
            time.sleep(3)

        print worker_id, x
        sys.stdout.flush()

        socket.send(b'world')

context = zmq.Context().instance()
# socket facing clients
frontend = context.socket(zmq.ROUTER)
frontend.bind("tcp://*:5559")
# socket facing services
backend = context.socket(zmq.DEALER)
url_worker = "inproc://workers"
backend.set_hwm(1)
backend.bind(url_worker)

# launch pool of worker threads
for i in range(2):
    thread = threading.Thread(target=worker_routine, args=(url_worker, i))
    thread.start()
    time.sleep(0.1)

try:
    zmq.device(zmq.QUEUE, frontend, backend)
except:
    print "terminating!"

# we never get here
frontend.close()
backend.close()
context.term()


The client code is as follows:

import zmq, random, string, time, threading, signal

# prepare our context and sockets
context = zmq.Context()
socket = context.socket(zmq.DEALER)
socket.connect("tcp://localhost:5559")

inputs = [''.join(random.choice(string.ascii_lowercase) for x in range(12)) for y in range(100)]

for x in xrange(100):
    socket.send_multipart([b'', str(x)])

print "finished!"


Example output:

...
0 81
0 82
0 83
0 84
0 85
0 86
0 87
0 88
0 89
0 90
0 91
0 92
0 93
0 94
0 95
0 96
0 97
0 98
0 99
1 1
1 3
1 5


#### Can I have multiple selftypes in Scala?

Can I have a class that can have two different self types in Scala? Or emulate it in some way?

object Hi {
  trait One {
    val num = 1
  }
  trait Two {
    val num = 2
  }
  class Test {
    this: One => {
      println(num)
    }
    this: Two => {
      println(num)
    }
  }
}

import Hi._
new Test with One
new Test with Two
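
A class can declare only one self-type annotation, so the two annotations in `Test` above won't compile. One way to emulate "either `One` or `Two`" is a single self type on a shared supertrait; a minimal sketch, where the `Num` trait is a hypothetical addition, not part of the original code:

```scala
object Hi {
  trait Num { def num: Int }            // hypothetical common supertrait
  trait One extends Num { val num = 1 }
  trait Two extends Num { val num = 2 }

  class Test { this: Num =>             // single self type covering both cases
    def show(): Int = num
  }
}

import Hi._
val a = new Test with One
val b = new Test with Two
```

Here `a.show()` yields 1 and `b.show()` yields 2; the self type guarantees every `Test` instance mixes in some `Num`.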


#### Setting a default param as the value of another param

In Scala, I can set default params:

case class Foo(a:String, b:String = "hey")


What I would like to do is something like this:

case class Foo(a:String, b:String = a)


But that would result in an error:

not found: value a


This would be very useful in cases like these:

case class User(createdAt:DateTime = DateTime.now, updatedAt:DateTime = createdAt)

case class User(id:Long, profileName:String = "user-" + id.toString)
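
A common workaround (a sketch of one option, not the only one) is an overloaded `apply` in the companion object, since a companion method can freely reference its earlier arguments:

```scala
case class Foo(a: String, b: String)

object Foo {
  // hypothetical overload supplying the dependent default b = a
  def apply(a: String): Foo = Foo(a, a)
}

val f1 = Foo("hey")        // b defaults to a
val f2 = Foo("hey", "ho")  // b given explicitly
```

The synthesized two-argument `apply` still works, so callers only gain the extra one-argument form.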


### CompsciOverflow

#### What are the advantages and disadvantages of a larger word size?

I'm doing a sample exam paper and there is this question:

Thinking of advantages, the only one that comes to my mind is that we can store larger values (more data) in one place (variable). What are the others? What are the disadvantages?

Research I have done:

What I came up with: see above.

### StackOverflow

#### Get IntelliJ to respect multiple @throws annotations in Scala dependency from a Java project

I have a Java project which depends on a Scala project. Inside that Scala project, there is a particular method that uses two @throws(classOf[<some exception>]) annotations:

@throws(classOf[ExtensionException])
@throws(classOf[LogoException])
def perform(args: Array[Argument], context: Context)


However, IntelliJ doesn't seem to know about both when I override the method:

The error is that the base method does not throw ExtensionException. The code compiles fine. Note that LogoException appears to be okay when I delete ExtensionException from the throws declaration.

So, is there a way I can get IntelliJ to respect both throws declarations? Or is this a bug?

### /r/compilers

#### Name for language?

Hello everyone! I am huge on compiler and language design, and have designed my own little C-like language. I am currently working on the compiler, which will first compile to C (or C++) and eventually to ELF, COM, or Mach-O (depending on your system). I am having trouble coming up with a name, though, and was wondering if someone here could help. Right now the only name I have is SLang (suggested by a friend, short for S Language, with the S being from DTSCode). Does anyone have any good ideas they would like to let me use?

submitted by DTSCode

### /r/compsci

What books can you recommend for a beginner to learn about regular languages, context-free grammars, formal grammars, etc.? Something along the writing style of books like "Learn You a Haskell for Great Good", which doesn't assume much prior knowledge and is very reader friendly.

submitted by HifiBoombox

### CompsciOverflow

#### Complexity of Linear Diophantine equations

My question is simply, can linear Diophantine equations be solved in polynomial time? Specifically, I am looking at equations of the form $a_1 x_1 + a_2 x_2 + \dots + a_n x_n = k$, where $a_i, x_i, k$ are all integers, and solving for $x_i$. The algorithm I am using is based on the following journal article:

The algorithm I derived from the article is roughly as follows:

1. Find a minimum coefficient (there could be several; just pick any of them).
2. Reduce every other coefficient modulo the coefficient you picked.
3. Check whether any coefficient is 1; if so, go to 5.
4. Go to 1.
5. Back-substitute to find the solution.
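
For comparison, the two-variable case $a_1 x_1 + a_2 x_2 = k$ is classically solved in polynomial time by the extended Euclidean algorithm, which the modular-reduction steps above generalize to $n$ variables. A minimal sketch (the names `extGcd` and `solve2` are illustrative, not from the article):

```scala
// Returns (g, x, y) with a*x + b*y == g == gcd(a, b), for a, b >= 0.
def extGcd(a: Long, b: Long): (Long, Long, Long) =
  if (b == 0) (a, 1L, 0L)
  else {
    val (g, x, y) = extGcd(b, a % b)
    (g, y, x - (a / b) * y)             // back-substitute one step
  }

// a*x + b*y = k has integer solutions iff gcd(a, b) divides k.
def solve2(a: Long, b: Long, k: Long): Option[(Long, Long)] = {
  val (g, x, y) = extGcd(a, b)
  if (k % g != 0) None
  else Some((x * (k / g), y * (k / g)))
}
```

For example, `solve2(3, 5, 1)` returns a pair `(x, y)` satisfying `3*x + 5*y == 1`, while `solve2(4, 6, 7)` returns `None` since gcd(4, 6) = 2 does not divide 7.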

Currently, I have an incomplete argument that the algorithm is indeed polynomial time. My argument is as follows:

Suppose we choose the $a_j$ such that it minimizes the number of bits the coefficients are reduced by, thereby maximizing the number of iterations. Then, after we compute $a'_{i \neq j} = a_{i \neq j} \bmod a_j$, we know that $a'_{i \neq j} < \frac{a_{i \neq j}}{2}$, because if it were not, then $a_j$ could go into $a_{i \neq j}$ another time. On the next iteration, the previous minimum $a_j$ cannot be the new minimum, since every other coefficient is now smaller. Thus, the previous minimum will necessarily be reduced by at least one bit. Hence, all of the coefficients are reduced by at least 1 bit every 2 iterations, so the maximum number of iterations is twice the maximum number of bits of a coefficient, $2\log_2(\max a_i)$. The back-substitution takes exactly the same number of steps, thus I estimate the algorithm is $O(\log(\max a_i))$, which is polynomial time with respect to the number of bits in the coefficient representation.

Am I missing something or is this correct? Please let me know if there is anything I need to clarify.

### TheoryOverflow

#### How can one find the "hard" probability distribution on the input for recursive boolean functions?

Update: Since it seems there is no progress regarding this question, any idea, conjecture, hunch, or advice is welcome. For example, are there any partial or incomplete results? What are the main challenges? Which techniques may be amenable to making any progress? Any observation (insightful or not, trivial or not) is also welcome.

## Background:

Decision tree complexity or query complexity is a simple model of computation defined as follows. Let $f:\{0,1\}^n\to \{0,1\}$ be a Boolean function. The deterministic query complexity of $f$, denoted $D(f)$, is the minimum number of bits of the input $x\in\{0,1\}^n$ that need to be read (in the worst case) by a deterministic algorithm that computes $f(x)$. Note that the measure of complexity is the number of bits of the input that are read; all other computation is free.

We define the zero error or Las Vegas randomized query complexity of $f$, denoted $R_0(f)$, as the minimum number of input bits that need to be read in expectation by a zero-error randomized algorithm that computes $f(x)$. A zero-error algorithm always outputs the correct answer, but the number of input bits read by it depends on the internal randomness of the algorithm. (This is why we measure the expected number of input bits read.)

We define the bounded error or Monte Carlo randomized query complexity of $f$, denoted $R_2(f)$, to be the minimum number of input bits that need to be read by a bounded-error randomized algorithm that computes $f(x)$. A bounded-error algorithm always outputs an answer at the end, but it only needs to be correct with probability $\geq$ $1 - \delta$ ($2/3$, say).

## Work on Recursive Boolean Functions:

There has been a line of work on the decision tree complexity of recursive boolean functions as mentioned below. The techniques focus on applying Yao's Lemma and using the distributional perspective guaranteed by it. This means we define a probability distribution on the inputs and the cost incurred by the best algorithm for this distribution gives a lower bound on the randomized decision tree complexity of the function. The worst possible distribution will give the actual randomized decision tree complexity.

The techniques in these works focus on giving a lower bound on the cost incurred by reading the "minority" bits (or vertices in the function tree) of the input via some form of induction. Another direction of attack could be to find the most "hard" distribution.

## Some Notions

We define: a distribution $D^*$ on an input set $I$ is hard for a given function $f$ if, for all distributions $D$ on $I$, $C(A, D) \leq C(A^*, D^*)$, where $C(A, D)$ is the expected cost (i.e., the expected number of input bits read) incurred by the deterministic decision tree $A$ when the input follows the probability distribution $D$, and where $A^* = \operatorname{argmin}_A C(A, D^*)$ and $A = \operatorname{argmin}_A C(A, D)$.

A distribution $D_1 < D_2$ if $C_m(D_1) < C_m(D_2)$, where $C_m(D_i) = C(A_i, D_i)$ and $A_i = \operatorname{argmin}_A C(A, D_i)$. In other words, "$D_2$ is harder than $D_1$" means the best possible algorithm for $D_2$ does worse than the best possible algorithm for $D_1$. Note: the algorithm must be correct on the whole domain, not just on the support of the distributions.

For the base case of a recursive boolean function, say on 2 or 4 bits, it is often easy to show that a certain distribution is hard; often it is an easy observation or an obvious fact. In many cases, it may seem natural that the hard distribution is the recursive extension of the base case's hard distribution. However, this may not be true in general, especially if the function is not symmetric over the input bits but rather skewed, i.e., not all input bits are equally important for inferring the value of the function on certain inputs (or a subset thereof).

## Question:

Is there any work on how to approach the problem of finding the "hard" distribution of the recursive function, from that of the base case function?

Is there any interesting connection of this problem with any other problems? Any comments are welcome.

## References:

[1] M. Saks, A. Wigderson Probabilistic Boolean Decision Trees and the Complexity of Evaluating Game Trees Proceedings of the 27th Foundations of Computer Science, pp. 29-38, October 1986.

[2] M. Santha. On the Monte Carlo boolean decision tree complexity of read-once formulae. Random Structures and Algorithms, 6(1):75–87, 1995.

[3] Frédéric Magniez, Ashwin Nayak, Miklos Santha, and David Xiao. Improved bounds for the randomized decision tree complexity of recursive majority. In Luca Aceto, Monika Henzinger, and Jiri Sgall, editors, ICALP (1), volume 6755 of Lecture Notes in Computer Science, pages 317–329. Springer, 2011.

[4] Nikos Leonardos. An improved lower bound for the randomized decision tree complexity of recursive majority. In Fedor V. Fomin, Rusins Freivalds, Marta Z. Kwiatkowska, and David Peleg, editors, Proceedings of 40th International Colloquium on Automata, Languages and Programming, volume 7965 of Lecture Notes in Computer Science, pages 696–708. Springer, 2013.

### StackOverflow

#### OpenJDK patch update for RHEL 6 server

We need to apply the JDK updates to one of the RHEL 6 servers. How do I apply the patch if I have the RPM package available, which I have downloaded from the internet? I searched a lot on the internet but couldn't find steps to apply the JDK patch. Also, what precautions should be taken before applying the new RPM update so that the current functionality is not disturbed?

### CompsciOverflow

#### Probability Game Question [migrated]

I am new to probability. I am trying to solve the following problem.

In a game, the probability of winning is $w$, the probability of losing is $l$, and the probability of the game continuing is $(1 - w - l)$. What is the probability of winning the game within $m$ steps?

I know the answer. It is $p(m) = w + (1 - w - l)\,p(m - 1)$. Can someone explain why?
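
The recurrence follows by conditioning on the outcome of the first game (a sketch, assuming the base case $p(0) = 0$): either that game is won outright, or it continues and a win must then occur within the remaining $m - 1$ steps (a first-game loss contributes nothing):

```latex
p(m) = \underbrace{w}_{\text{win on the first game}}
     + \underbrace{(1 - w - l)\,p(m-1)}_{\text{game continues, then win within } m-1 \text{ further steps}}
```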

### StackOverflow

#### Confused about diagrams of Yampa switches

There is some diagrams of Yampa switches at:

(and so on).

I've found that switch, the only diagram with a description, is the easiest one to understand. The others seem hard to follow, even though they use similar symbols. For example, an attempt to read the rSwitch diagram using the symbols from the switch diagram might be:

Be a recursive SF which is always fed a signal of type 'in' and returns a signal of type 'out'. Start with an initial SF of that type, but someone outside the switch function (the ?[cond] square) may also pass a new SF via an Event (the type Event (SF in out) in the signature) while the condition is satisfied (hence the '?' before the [cond] square). In case of the Event, Yampa would use the new SF instead of the existing one. This process is recursive because of the '?' (I can't tell this from the diagram, except that the signature of rSwitch seems recursive).

And after looking into the source of rSwitch, it looks like it uses switch to switch to the same initial SF recursively while the event t is fired (according to what is described in the diagram, although I don't see in the source code what the special t that would be fired is).

The Yampa Arcade paper explains dpSwitch with code and an example, and the paper about the game 'Frag' also uses dpSwitch. However, rSwitch seems absent from these tutorials. So I really don't know how to use the r- or k-series switches, and in what cases we would need them.

### CompsciOverflow

#### Can/Do multiple processes run simultaneously on a multi-core system?

I understand context switches and threading on a single core system, but I'm trying to understand what happens in a multi-core system. I know multiple threads from the same process can run simultaneously in a multi-core system. However, can multiple processes run simultaneously in such a system as well?

In other words, in a dual-core processor:

- How many processes can run simultaneously (without context switching) if all processes are single-threaded?
- How many processes can run simultaneously if there are 2 processes and both are multi-threaded?

### QuantOverflow

#### An optimization problem on Markov Chain

Consider a Markov chain $\{X_n\}$ whose transition probabilities depend on some parameter $\theta$ (i.e., $p_{ij}(\theta)$). Now I want to optimize the following quantity:

$$\lambda(\theta) = \lim_{n\to\infty} \frac{1}{n}E\left[\sum_{m=0}^{n}f_{X_m}(\theta)\right]$$

where $f_i(\theta)$ is a given function for each state $i$ of the Markov chain. We don't know the transition probabilities of the Markov chain; we only have a simulator which can generate states according to those probabilities for a given parameter $\theta$.

I want to know what methods are available in the literature for this problem.

### StackOverflow

#### Setting date range in queries

I would like to add a condition to my query to filter records which have a date falling in the current month. How can I achieve this? I tried to use clj-time to retrieve the current month and match it against the DB field, but that didn't give me any luck.

#### converting a List of Map to a Map in Scala [duplicate]

I have a List of Maps and I want to merge them into a single Map.

For example -

Input

List(Map(a -> 1), Map(b -> 2), Map(c -> 3))


Output

Map( a -> 1, b -> 2, c -> 3 )
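
A minimal sketch of one way to do the merge (note that with `++`, later keys overwrite earlier ones on collision):

```scala
val input  = List(Map("a" -> 1), Map("b" -> 2), Map("c" -> 3))

// fold the maps together, left to right
val merged = input.foldLeft(Map.empty[String, Int])(_ ++ _)
// merged == Map("a" -> 1, "b" -> 2, "c" -> 3)
```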


#### Inconsistency with scala reflection library

I'm having trouble understanding why using scala's runtime reflection in 2.11.1 gives me seemingly inconsistent results.

I am trying to inspect the type of a field contained in a Java object, like this one:

import java.util.List;
import java.util.ArrayList;

public class Example {
    private List<Integer> listOfInts;

    public Example () {
        listOfInts = new ArrayList<Integer>();
    }
}


Now suppose I have a Scala program that tries to reason about the type of the field inside "Example":

import java.lang.Class
import java.lang.reflect.Field
import java.util.List
import scala.reflect.runtime.{ universe => ru }

object Inspect extends scala.App {
  val mirror = ru.runtimeMirror(getClass.getClassLoader)

  val example = new Example
  val cls = example.getClass
  val listfield = cls.getDeclaredField("listOfInts")

  println(isListType(listfield)) // prints false
  println(isListType(listfield)) // prints true, as do all subsequent calls

  def isListType(field: Field): Boolean = {
    /*
    A function that returns whether the type of the field is a list.
    Based on examples at http://docs.scala-lang.org/overviews/reflection/environment-universes-mirrors.html
    */
    val fieldcls = field.getType

    val fieldsym: ru.ClassSymbol = mirror.classSymbol(fieldcls)
    val fieldtype: ru.Type = fieldsym.toType

    (fieldtype <:< ru.typeOf[List[_]])
  }
}


In this particular code snippet, the first call to isListType returns false, and the second returns true. If I switch the type operator from <:< to =:=, the first call returns true, and the second false.

I have a similar function in a larger code body, and have found that this behavior occurs even when the function is part of a static object. It does not happen when using unparameterized classes. While I intended the function to be pure, this is obviously not the case: further experimentation has shown that there is some persistent state held somewhere. If I replace the isListType function with straight-line code, I get this:

...
val example = new Example
val cls = example.getClass
val listfield = cls.getDeclaredField("listOfInts")
val fieldcls = listfield.getType

val fieldsym: ru.ClassSymbol = mirror.classSymbol(fieldcls)
val fieldtype: ru.Type = fieldsym.toType

println(fieldtype <:< ru.typeOf[List[_]]) // prints false
println(fieldtype <:< ru.typeOf[List[_]]) // prints false


but if I reassign to fieldtype after the <:< operator, I get this:

// replace as under the fieldsym assignment
var fieldtype: ru.Type = fieldsym.toType
println(fieldtype <:< ru.typeOf[List[_]]) // prints false

fieldtype = fieldsym.toType
println(fieldtype <:< ru.typeOf[List[_]]) // prints true


while reassigning to fieldtype before the <:< operator gives this:

// replace as under the fieldsym assignment
var fieldtype: ru.Type = fieldsym.toType
fieldtype = fieldsym.toType

println(fieldtype <:< ru.typeOf[List[_]]) // prints false
println(fieldtype <:< ru.typeOf[List[_]]) // prints false


Does anyone understand what I'm doing wrong here, or at least have a way around this?

### CompsciOverflow

#### Why do negative array indices make sense?

I have come across a weird experience in C programming. Consider this code:

#include <stdio.h>

int main(){
    int array1[6] = {0, 1, 2, 3, 4, 5};
    int array2[6] = {6, 7, 8, 9, 10, 11};

    printf("%d\n", array1[-1]);
    return 0;
}


When I compile and run this, I don't get any errors or warnings. As my lecturer said, the array index -1 accesses another variable. I'm still confused: why on earth does a programming language have this capability? I mean, why allow negative array indices?

### StackOverflow

#### Cannot Connect to SQS with Akka-Camel

I'm trying to push messages through my Scala application to an SQS queue. I receive the following error when trying to connect to SQS:

akka.camel.internal.ProducerRegistrar$$anonfun$receive$3.applyOrElse(CamelSupervisor.scala:159) at akka.actor.ActorCell.receiveMessage(ActorCell.scala:425) at akka.actor.ActorCell.invoke(ActorCell.scala:386) at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:230) at akka.dispatch.Mailbox.run(Mailbox.scala:212) at akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec(AbstractDispatcher.scala:506) at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:262) at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975) at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1478) at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104) Caused by: org.apache.camel.ResolveEndpointFailedException: Failed to resolve endpoint: aws-sqs://analyticsSandboxSQS?accessKey=<access>&secretKey=<secret> due to: No component found with scheme: aws-sqs at org.apache.camel.impl.DefaultCamelContext.getEndpoint(DefaultCamelContext.java:475) at akka.camel.internal.ProducerRegistrar$$anonfun$receive$3.applyOrElse(CamelSupervisor.scala:151)
... 9 more


I have used the following code to set up the URI:

import akka.actor.{ Actor, ActorSystem, Props }
import akka.camel.{ Oneway, Producer }

class EventSenderSQS extends Actor with Producer with Oneway {
def endpointUri = "aws-sqs://queueName?accessKey=<access>&secretKey=<secret>"
}


And I use the following to try to send a message:

val sys = ActorSystem("sys")
val eventsActor = sys.actorOf(Props[EventSenderSQS])
eventsActor ! "testMessage"


I am using akka-camel version 2.1.4, which should support the aws-sqs endpoint.

### CompsciOverflow

#### Why does bubble sort do $\Theta(n^2)$ comparisons on an $n$ element list?

I have a quick question about the bubble sort algorithm: why does it perform $\Theta(n^2)$ comparisons on an $n$-element list?

I looked at the Wikipedia page and it does not seem to tell me. I know that, because of its magnitude, it takes a lot of work on large inputs.

### StackOverflow

#### What is the proper way to return from an exception in Scala?

In a non-functional language, I might do something like:

try {
  // some stuff
} catch (Exception ex) {
  return false;
}

// Do more stuff

return true;


In Scala, however, this pattern is clearly not correct. If my Scala code looks like this:

try {
  // do some stuff
}
catch {
  case e: Exception => // I want to get out of here and return false
}

// do more stuff

true


How do I properly do that? I don't want to use the "return" statement, of course, but I also don't want to fall through to "do more stuff" and eventually return true.
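
One idiomatic shape (a sketch; `riskyStep` and `moreWork` are placeholder names for "some stuff" and "do more stuff") wraps the failure-prone part in `scala.util.Try`, so each branch produces the final Boolean as an expression and no early return is needed:

```scala
import scala.util.{Failure, Success, Try}

def riskyStep(): Unit = ()   // placeholder: the part that may throw
def moreWork(): Unit = ()    // placeholder: the follow-up work

def run(): Boolean =
  Try(riskyStep()) match {
    case Failure(_) => false             // exception: evaluate to false
    case Success(_) => moreWork(); true  // continue, then evaluate to true
  }
```

Because the whole body is one expression, the "get out of here with false" happens naturally in the `Failure` branch.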

### arXiv Logic in Computer Science

#### The Logic of Approximate Dependence. (arXiv:1408.4437v1 [math.LO])

We extend the treatment of functional dependence, the basic concept of dependence logic, to include the possibility of dependence with a limited number of exceptions. We call this approximate dependence. The main result of the paper is a Completeness Theorem for approximate dependence atoms. We point out some problematic features of this notion, which suggest that we should consider multi-teams, not just teams.

#### Proceedings of the First International Workshop on FPGAs for Software Programmers (FSP 2014). (arXiv:1408.4423v1 [cs.AR])

This volume contains the papers accepted at the First International Workshop on FPGAs for Software Programmers (FSP 2014), held in Munich, Germany, September 1st, 2014. FSP 2014 was co-located with the International Conference on Field Programmable Logic and Applications (FPL).

#### The Structure of Optimal and Near Optimal Target Sets in Consensus Models. (arXiv:1408.4364v1 [cs.DM])

We consider the problem of identifying a subset of nodes in a network that will enable the fastest spread of information in a decentralized environment. In a model of communication based on a random walk on an undirected graph, the optimal set over all sets of the same or smaller cardinality minimizes the sum of the mean first arrival times to the set by walkers starting at nodes outside the set. The problem originates from the study of the spread of information or consensus in a network and was introduced in this form by V. Borkar et al. in 2010. More generally, the work of A. Clark et al. in 2012 showed that estimating the fastest rate of convergence to consensus of so-called leader follower systems leads to a consideration of the same optimization problem.

The set function $F$ to be minimized is supermodular, and therefore the greedy algorithm is commonly used to construct optimal sets or their approximations. In this paper, the problem is reformulated so that the search for solutions is restricted to optimal and near optimal subsets of the graph. We prove sufficient conditions for the existence of a greedoid structure that contains feasible optimal and near optimal sets. It is therefore possible, we conjecture, to search for optimal or near optimal sets by local moves in a stepwise manner, obtaining near optimal sets that are better approximations than the factor $(1-1/e)$ degree of optimality guaranteed by the greedy algorithm. A simple example illustrates aspects of the method.

#### Mahonian STAT on words. (arXiv:1408.4290v1 [cs.DM])

In 2000, Babson and Steingr\'imsson introduced the notion of what is now known as a permutation vincular pattern, and based on it they re-defined known Mahonian statistics and introduced new ones, proving or conjecturing their Mahonity. These conjectures were proved by Foata and Zeilberger in 2001, and by Foata and Randrianarivony in 2006.

In 2010, Burstein refined some of these results by giving a bijection between permutations with a fixed value for the major index and those with the same value for STAT, where STAT is one of the statistics defined and proved to be Mahonian in the 2000 Babson and Steingr\'imsson's paper. Several other statistics are preserved as well by Burstein's bijection.

At the Formal Power Series and Algebraic Combinatorics Conference (FPSAC) in 2010, Burstein asked whether his bijection has other interesting properties. In this paper, we not only show that Burstein's bijection preserves the Eulerian statistic ides, but also use this fact, along with the bijection itself, to prove Mahonity of the statistic STAT on words we introduce in this paper. The words statistic STAT introduced by us here addresses a natural question on existence of a Mahonian words analogue of STAT on permutations. While proving Mahonity of our STAT on words, we prove a more general joint equidistribution result involving two six-tuples of statistics on (dense) words, where Burstein's bijection plays an important role.

#### Enhancing the Accuracy of Device-free Localization Using Spectral Properties of the RSS. (arXiv:1408.4239v1 [cs.NI])

Received signal strength based device-free localization has attracted considerable attention in the research society over the past years to locate and track people who are not carrying any electronic device. Typically, the person is localized using a spatial model that relates the time domain signal strength measurements to the person's position. Alternatively, one could exploit spectral properties of the received signal strength which reflects the rate at which the wireless propagation medium is being altered, an opportunity that has not been exploited in the related literature. In this paper, the power spectral density of the signal strength measurements are related to the person's position and velocity to augment the particle filter based tracking algorithm with an additional measurement. The system performance is evaluated using simulations and validated using experimental data. Compared to a system relying solely on time domain measurements, the results suggest that the robustness to parameter changes is increased while the tracking accuracy is enhanced by 50% or more when 512 particles are used.

#### A Price Selective Centralized Algorithm for Resource Allocation with Carrier Aggregation in LTE Cellular Networks. (arXiv:1408.4151v1 [cs.NI])

In this paper, we consider a resource allocation with carrier aggregation optimization problem in long term evolution (LTE) cellular networks. In our proposed model, users are running elastic or inelastic traffic. Each user equipment (UE) is assigned an application utility function based on the type of its application. Our objective is to allocate multiple carriers resources optimally among users in their coverage area while giving the user the ability to select one of the carriers to be its primary carrier and the others to be its secondary carriers. The UE's decision is based on the carrier price per unit bandwidth. We present a price selective centralized resource allocation with carrier aggregation algorithm to allocate multiple carriers resources optimally among users while providing a minimum price for the allocated resources. In addition, we analyze the convergence of the algorithm with different carriers rates. Finally, we present simulation results for the performance of the proposed algorithm.

#### Ameliorate Threshold Distributed Energy Efficient Clustering Algorithm for Heterogeneous Wireless Sensor Networks. (arXiv:1408.4112v1 [cs.NI])

Ameliorating the lifetime of a heterogeneous wireless sensor network is an important task because the sensor nodes are limited in energy. The best way to improve a WSN's lifetime is clustering-based algorithms, in which each cluster is managed by a leader called a Cluster Head. Every other node must communicate with this CH to send its sensed data. The nodes nearest the base station must also send their data to their leaders; this causes a loss of energy. In this paper, we propose a new approach to ameliorate a threshold distributed energy efficient clustering protocol for heterogeneous wireless sensor networks by excluding the nodes closest to the base station from the clustering process. We show by simulation in MATLAB that the proposed approach clearly increases the number of received packet messages and prolongs the lifetime of the network compared to the TDEEC protocol.

#### On the expected number of equilibria in a multi-player multi-strategy evolutionary game. (arXiv:1408.3850v1 [math.PR] CROSS LISTED)

In this paper, we analyze the mean number $E(n,d)$ of internal equilibria in a general $d$-player $n$-strategy evolutionary game where the agents' payoffs are normally distributed. First, we give a computationally implementable formula for the general case. Next we characterize the asymptotic behavior of $E(2,d)$, estimating its lower and upper bounds as $d$ increases. Then we provide an exact formula for $E(n,2)$. As a consequence, we show that in both cases the probability to see the maximal possible number of equilibria tends to zero when $d$ or $n$ respectively goes to infinity. Finally, for larger $n$ and $d$, numerical results are provided and discussed.

#### Type Expressiveness and Its Application in Separation of Behavior Programming and Data Management Programming. (arXiv:1107.3193v10 [cs.PL] UPDATED)

A new behavior-descriptive entity type called spec is proposed, which combines the traditional interface with test rules and test cases, to completely specify the desired behavior of each method and to enforce the behavior-wise correctness of all compiled units. Using spec, a new programming paradigm is proposed, which allows the separation of the programming space into 1) a behavior domain to aggregate all behavior programming in the format of specs, 2) an object domain to bind each concrete spec to its data representation in a particular address space, and 3) a realization domain to connect the behavior domain and the object domain. Such separation guarantees the strictness of behavior satisfaction at compile time, while allowing the flexibility of dynamically binding the actual implementation at runtime. A new convention called type expressiveness, which allows data exchange between different programming languages and between different software environments, is also proposed.

### StackOverflow

#### Java/Scala library for algebra, mathematics

Can you advise a flexible and powerful, yet fast, library that could cover SciPy (in both performance and functionality)? I find SciPy very expressive, but I want to try something in Scala.

I have read a little about Scala's options, but they do not seem as fully featured as SciPy. Any alternatives? Maybe a Java library?

### CompsciOverflow

#### Why comparison is dominant time consumption for comparison-based sorting algorithms? [duplicate]

Comparison-based sorting algorithms perform a number of different operations to accomplish the sorting; why are comparisons considered the dominant time consumption? While I understand the standard analyses of the asymptotic behavior of the number of comparison operations, I don't quite understand why the costs of the other types of operations are negligible.
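Since the question is language-agnostic, here is a small Python sketch (an illustration, not a formal argument) that instruments insertion sort to count comparisons and element moves separately. The usual intuition is that each of the other operations happens at most a constant number of times per comparison, so the comparison count captures the asymptotics.

```python
def insertion_sort_instrumented(xs):
    """Insertion sort that tallies comparisons and element moves."""
    xs = list(xs)
    comparisons = 0
    moves = 0
    for i in range(1, len(xs)):
        key = xs[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison per inner-loop test
            if xs[j] > key:
                xs[j + 1] = xs[j]     # shifting an element is a "move"
                moves += 1
                j -= 1
            else:
                break
        xs[j + 1] = key
    return xs, comparisons, moves

sorted_xs, cmps, mvs = insertion_sort_instrumented([5, 2, 4, 6, 1, 3])
# here every move is triggered by a comparison, so moves never exceed comparisons
```

On this input the counts come out within a small constant factor of each other, which is the pattern the standard analysis relies on.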

### QuantOverflow

#### Is it possible to model general wrong way risk via concentration risk?

General wrong way risk (GWWR) is defined as risk due to a positive correlation between the level of exposure and the default probability of the counterparty, driven by general market factors. (Specific wrong way risk arises when they are positively correlated due to counterparty-specific factors.) According to the “Risk Concentration Principles” (bcbs63), “different entities within the conglomerate could be exposed to the same or similar risk factors or to apparently unrelated risk factors that may interact under some unusual stressful circumstances.”

Given that different market factors tend to have a stronger positive correlation when one is talking about the same country/region (mainly the base curves), the same industry (mainly the spreads), etc., should concentration risk (per region, industry, ...) be used to model general wrong way risk?

With 5 regions (Americas, UK, Europe (ex UK), Japan, Asia-Pacific (ex Japan)) and 10 sectors (Energy, Basic Materials, Industrials, Consumer Cyclical, Consumer Non-Cyclical, Health Care, Financials, Information Technology, Telecommunication Services and Utilities), you should be able to get the GWWR from a sort of variance of the concentration from the average_of_sectors (ideally 10%) and average_of_regions (ideally 20%). Compare having 40% of your exposure in Energy, 30% in Financials, 20% in Telecommunication Services and 10% in whatever else against a well-diversified book. What I mean is, assuming that the rest of the parameters are all the same (same maturities, types of instruments = bonds, to simplify, principals, etc.), the GWWR should be much larger for 40-10-40-10 than for 30-30-30-10.

Ex1: A Swiss company receives CHF, buys materials in EUR and takes a loan in EUR to pay for them. If the EUR appreciates with respect to the CHF, both the probability of default of the company (raw materials increase in price) and its exposure in CHF increase. As default is a statistical property, having 40% of your portfolio as loans provided to many such companies will make you notice the defaults (which no longer behave idiosyncratically, as they would with a single company). Assume the lender does not structure its business around the EUR/CHF exchange-rate risk.

Ex2: You are a European lender 10 years ago. People buy houses and earn salaries in the local currency and take mortgages in CHF, as the CHF had very low/the lowest interest rates. The CHF rises by a factor of 1.25, and the exposure rises by 25%. The probability of default rises, as the price of the house/collateral does not rise in the local currency and the monthly payment goes well over the allowed indebtedness percentage. If you are providing many such mortgages, you are exposed to GWWR proportional to their concentration within your portfolio.

My question is whether general wrong way risk is not a form of double counting (shouldn't wrong way risk include only the specific wrong way risk?). Could someone please give an example of GWWR where concentration is not a factor?

I guess that one can regress credit risk/hazard rates on market factors and look for strong correlations, but this should already be accounted for by the stressed VaR.
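For what it's worth, the 40-10-40-10 vs 30-30-30-10 comparison in the question can be made concrete with a Herfindahl-Hirschman-style index (the sum of squared exposure weights). The index choice and the portfolios below are illustrative assumptions only, not a standard GWWR model.

```python
def herfindahl(weights):
    """Sum of squared exposure weights; higher means more concentrated."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * w for w in weights)

# Hypothetical sector exposure weights from the question's example
concentrated = [0.40, 0.10, 0.40, 0.10]   # the 40-10-40-10 book
diversified  = [0.30, 0.30, 0.30, 0.10]   # the 30-30-30-10 book

# The more concentrated book scores higher, matching the intuition that,
# all else equal, its general wrong way risk should be larger.
print(herfindahl(concentrated), herfindahl(diversified))
```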

### QuantOverflow

#### what is General IB2 Restriction in Basel II credit risk model

I was reading Basel II wiki page, it says:

The first pillar

The first pillar deals with maintenance of regulatory capital calculated for three major components of risk that a bank faces: credit risk, operational risk, and market risk. Other risks are not considered fully quantifiable at this stage.

The credit risk component can be calculated in three different ways of varying degree of sophistication, namely standardized approach, Foundation IRB, Advanced IRB and General IB2 Restriction. IRB stands for "Internal Rating-Based Approach".

Any idea what such a “General IB2 Restriction” is? I checked Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework, Comprehensive Version (BCBS, June 2006 revision) but couldn't find any definition.

#### Stress Testing Methods

I'm working on the following task:

Given quarterly data:

1. a time series representing the 1-year realized (10 years of data) rates of default on a portfolio of mortgages
2. a slew of realized (10 years of data) macroeconomic time series. Each time series may or may not be relevant
3. A stressed scenario of those same macroeconomic time series for 2 years

Estimate the probability of default using the stressed data.

I don't actually know anything about underlying distributions. The only data I have for inference are these time series.

My initial approach was something like this: I would first make every time series stationary. Then eliminate macroeconomic variables that were not significantly correlated with my dependent variable. Then use a stepwise method to determine the best variables to use in a linear regression. Then I would include those exogenous variables while fitting an ARIMA model. Along the way I would do several tests (e.g., autocorrelation, multicollinearity, stationarity, etc.). Then use that model for prediction.

Note that I actually have several different "portfolios" which I am fitting. Using my above procedure, some of the stressed scenarios appear unreasonable. So, I began looking for totally different alternatives. Are there any suggestions?

I realize this is an unreasonably broad question. To narrow the scope, I've done some brief research and believe some viable alternatives might include:

• Calibrating some dynamic transition densities using Bayesian inference and MCMC
• Calibrating a conditional Vasicek model that allows for autocorrelation

The problem is, I'm not too familiar with these methods and would want to make efficient use of my time.

Would you suggest I attempt implementing these alternatives? Or some other alternative?

Do you have any advice for implementation in R?

Thank you!
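As a starting point, the core "regress default rates on macro variables, then predict under the stressed path" step might be sketched as below, using plain least squares in Python on synthetic stand-in data. The macro series names, coefficients, and stress path are all invented for illustration; the variable-selection, stationarity, and ARIMA-error-structure steps described above are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40                                    # 10 years of quarterly observations

# Hypothetical macro series and a default-rate series driven by them
unemployment = rng.normal(6.0, 1.0, n)
hpi_growth = rng.normal(2.0, 1.5, n)
default_rate = 0.5 + 0.3 * unemployment - 0.2 * hpi_growth + rng.normal(0, 0.1, n)

# Ordinary least squares: default_rate ~ intercept + unemployment + hpi_growth
X = np.column_stack([np.ones(n), unemployment, hpi_growth])
beta, *_ = np.linalg.lstsq(X, default_rate, rcond=None)

# Stressed scenario: 8 quarters of elevated unemployment and falling house prices
stress = np.column_stack([np.ones(8), np.full(8, 10.0), np.full(8, -3.0)])
stressed_pd = stress @ beta               # predicted default rates under stress
```

In R the analogous step would be `lm()` followed by `predict()` on the stressed covariates; the sketch above is only the skeleton that the ARIMA/Bayesian alternatives would replace.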

### StackOverflow

#### implicit definition and tail recursion

Probably the most straightforward definition of the list reversal function in a functional language is (using Haskell-like pseudocode)

rev [] = []
rev (x:xs) = (rev xs) ++ [x]


However, every beginning functional programmer is taught that this implementation is inefficient and that one should instead write

rev' [] acc = acc
rev' (x:xs) acc = rev' xs (x:acc)
rev l = rev' l []


A bad thing about the efficient version is that the programmer is forced to introduce an auxiliary function and parameter whose meaning is not very clear. It occurred to me that it might be possible to avoid this if a language permitted implicit definitions roughly like the following:

rev [] = []
(rev (x:xs)) ++ m = (rev xs) ++ (x:m)


These equations fully determine the behavior of rev, so they might be said to constitute an implicit definition of it. They do not have the defect of introducing the auxiliary function rev'. Yet there is a natural way of evaluating the function that will be efficient. For instance, here is a plausible reduction sequence:

rev [1,2,3]
matches second line with x=1, xs=[2,3], m=[]
reduces to (rev [2,3]) ++ [1]
matches second line with x=2, xs=[3], m=[1]
reduces to (rev [3]) ++ [2,1]
matches second line with x=3, xs=[], m=[2,1]
reduces to (rev []) ++ [3,2,1]
reduces ultimately to [3,2,1]


I don't have much of a sense for how widely this kind of thing could be applied, but it does seem to work nicely in this example at least, and it seems to me that it could at least work for some similar cases where one would otherwise have to introduce auxiliary functions for the sake of efficiency. Can anyone point me to any papers that discuss something like this or languages that support something like this? It sort of feels like a logic programming thing to me, but I have very little experience with logic programming.
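For comparison, the auxiliary parameter the question wants to avoid can also be hidden behind a fold: reversal is exactly a left fold that conses each element onto an accumulator. This Python rendering of the Haskell-like pseudocode above is only an illustration of that equivalence, not a proposal for the implicit-definition mechanism itself.

```python
from functools import reduce

def rev(xs):
    # acc plays the role of rev' second argument; reduce threads it implicitly,
    # so no separately named auxiliary function is exposed to the caller.
    return reduce(lambda acc, x: [x] + acc, xs, [])

print(rev([1, 2, 3]))   # [3, 2, 1], matching the reduction sequence above
```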

### Planet Theory

#### Unconstrained Quasi-Submodular Function Optimization

Authors: Jincheng Mei, Kang Zhao, Bao-Liang Lu
Abstract: With the extensive application of submodularity, its generalizations are constantly being proposed. However, most of them are tailored for special problems. In this paper, we focus on quasi-submodularity, a universal generalization, which satisfies weaker properties than submodularity but still enjoys favorable performance in optimization. Similar to the diminishing return property of submodularity, we first define a corresponding property called the single sub-crossing, then we propose two algorithms for unconstrained quasi-submodular function minimization and maximization, respectively. The proposed algorithms return the reduced lattices in O(n) iterations, and guarantee the objective function values are strictly monotonically increased or decreased after each iteration. Moreover, any local and global optima are definitely contained in the reduced lattices. Experimental results verify the effectiveness and efficiency of the proposed algorithms on lattice reduction.

#### Fast Approximate Matrix Multiplication by Solving Linear Systems. (arXiv:1408.4230v2 [cs.DS] UPDATED)

In this paper, we present novel deterministic algorithms for multiplying two $n \times n$ matrices approximately. Given two matrices $A,B$ we return a matrix $C'$ which is an \emph{approximation} to $C = AB$. We consider the notion of approximate matrix multiplication in which the objective is to make the Frobenius norm of the error matrix $C-C'$ arbitrarily small. Our main contribution is to first reduce the matrix multiplication problem to solving a set of linear equations and then use standard techniques to find an approximate solution to that system in $\tilde{O}(n^2)$ time. To the best of our knowledge, this is the first examination into designing quadratic-time deterministic algorithms for approximate matrix multiplication which guarantee arbitrarily low \emph{absolute error} w.r.t. the Frobenius norm.

#### Optimal Polynomial Solution for the Minimum Sum Two Paths Problem

Authors: Costas K. Constantinou, Georgios Ellinas
Abstract: The current paper presents the first optimal polynomial solution to the extensively investigated, long standing problem of finding a pair of disjoint paths with minimum total cost between two sources and two destinations, i.e., to the problem known as the Minimum Sum Two Paths Problem. An algorithm with polynomial time complexity that gives the optimal solution for any arbitrary undirected graph, for both cases of node-disjoint and edge-disjoint paths, is presented in the paper, along with its proof of correctness.

#### Approximate Revenue Maximization in Interdependent Value Settings

Authors: Shuchi Chawla, Hu Fu, Anna Karlin
Abstract: We study revenue maximization in settings where agents' values are interdependent: each agent receives a signal drawn from a correlated distribution and agents' values are functions of all of the signals. We introduce a variant of the generalized VCG auction with reserve prices and random admission, and show that this auction gives a constant approximation to the optimal expected revenue in matroid environments. Our results do not require any assumptions on the signal distributions, however, they require the value functions to satisfy a standard single-crossing property and a concavity-type condition.

#### Efficient Online Strategies for Renting Servers in the Cloud

Authors: Shahin Kamali, Alejandro López-Ortiz
Abstract: In Cloud systems, we often deal with jobs that arrive and depart in an online manner. Upon its arrival, a job should be assigned to a server. Each job has a size which defines the amount of resources that it needs. Servers have uniform capacity and, at all times, the total size of jobs assigned to a server should not exceed the capacity. This setting is closely related to the classic bin packing problem. The difference is that, in bin packing, the objective is to minimize the total number of used servers. In the Cloud, however, the charge for each server is proportional to the length of the time interval it is rented for, and the goal is to minimize the cost involved in renting all used servers. Recently, certain bin packing strategies were considered for renting servers in the Cloud [Li et al. SPAA'14]. There, it is proved that every Any-Fit bin packing strategy has a competitive ratio of at least $\mu$, where $\mu$ is the max/min interval length ratio of jobs. It is also shown that First Fit has a competitive ratio of $2\mu + 13$ while Best Fit is not competitive at all. We observe that the lower bound of $\mu$ extends to all online algorithms. We also prove that, surprisingly, the Next Fit algorithm has a competitive ratio of at most $2 \mu +1$. We also show that a variant of Next Fit achieves a competitive ratio of $K \times max\{1,\mu/(K-1)\}+1$, where $K$ is a parameter of the algorithm. In particular, if the value of $\mu$ is known, the algorithm has a competitive ratio of $\mu+2$; this improves upon the existing upper bound of $\mu+8$. Finally, we introduce a simple algorithm called Move To Front (MTF) which has a competitive ratio of at most $6\mu + 7$ and also promising average-case performance. We experimentally study the average-case performance of different algorithms and observe that the typical behaviour of MTF is distinctively better than that of the other algorithms.

#### Quantified Conjunctive Queries on Partially Ordered Sets

Authors: Simone Bova, Robert Ganian, Stefan Szeider
Abstract: We study the computational problem of checking whether a quantified conjunctive query (a first-order sentence built using only conjunction as Boolean connective) is true in a finite poset (a reflexive, antisymmetric, and transitive directed graph). We prove that the problem is already NP-hard on a certain fixed poset, and investigate structural properties of posets yielding fixed-parameter tractability when the problem is parameterized by the query. Our main algorithmic result is that model checking quantified conjunctive queries on posets of bounded width is fixed-parameter tractable (the width of a poset is the maximum size of a subset of pairwise incomparable elements). We complement our algorithmic result by complexity results with respect to classes of finite posets in a hierarchy of natural poset invariants, establishing its tightness in this sense.

### StackOverflow

#### scala -- syntax to indicate any kind of anonymous function, whatsoever

I'd like to be able to pass in callback functions as parameters to a method. Right now, I can pass in a function of signature () => Unit, as in

def doSomething(fn:() => Unit) {
//... do something
fn()
}


which is fine, I suppose, but I'd like to be able to pass in any function with any parameters and any return type.

Is there a syntax to do that?

Thanks
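Not an answer in Scala, but the analogous idea rendered in Python for concreteness (the function and argument names here are invented for illustration): a parameter typed as an arbitrary callable accepts any arity and any return type, with the call site supplying whatever arguments the callback needs.

```python
from typing import Any, Callable

def do_something(fn: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
    # ... do something, then invoke the callback with whatever it was given
    return fn(*args, **kwargs)

result = do_something(lambda a, b: a + b, 2, 3)   # works for any arity/return type
```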

### Planet Clojure

#### Using Parquet + Protobufs with Spark

I recently had occasion to test out using Parquet with protobufs. I got some simple tests working, and since I had to do a lot of reading to get to this point, I thought I'd do the world a favor and document the process here.

First, some definitions:

Parquet is a column-oriented data storage format for Hadoop from Twitter. Column-oriented storage is really nice for “wide” data, since you can efficiently read just the fields you need.

Protobuf is a data serialization library developed by Google. It lets you efficiently and quickly serialize and deserialize data for transport.

Parquet has low-level support for protobufs, which means that if you happen to have protobuf-serialized data, you can use it with Parquet as-is to performantly do partial deserializations and query across that data.

You might do that using spark, a fast mapreduce engine with some nice ease-of-use. Spark can even read from Hadoop, which is nice.

I got a lot of information from this post on doing the same with Avro. I happen to be using Clojure, but I hope you'll be able to follow along anyhow (here's a quick syntax primer). If you want to follow along exactly, you can check out the github repo of my sample project.

The first tricky bit was sorting dependencies out. Some highlights from this process:

• You must exclude the import of javax.servlet:servlet-api from hadoop, and from anything that depends on hadoop. Otherwise, you'll get some issues where this conflicts with spark's version.
• You need to explicitly include a hadoop-client of your preferred version, otherwise Spark will fall back on some undefined client version (Hadoop 1.something)
• You need to import a number of separate parquet projects.

Here's what my project.clj (like maven but shorter) ended up looking like:

(defproject sparkquet "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.6.0"]

                 ; Spark wrapper
                 [yieldbot/flambo "0.3.2"]

                 ; Still need spark & hadoop (must pick specific client)
                 [org.apache.spark/spark-core_2.10 "1.0.1"]
                 [org.apache.hadoop/hadoop-client "..."
                  :exclusions [javax.servlet/servlet-api]] ; Conflicts with spark's

                 ; Parquet stuff
                 [... :exclusions [javax.servlet/servlet-api]]
                 [... :exclusions [javax.servlet/servlet-api commons-lang]]

                 ; And, of course, protobufs
                 [...]]
  :java-source-paths ["src/java"]
  :source-paths ["src/clj"]
  :plugins [[lein-protobuf "0.4.1"]])


For this example, we'll be using this simple protobuf:

package sparkquet;

message MyDocument {
enum Category {
THINGS = 1;
STUFF = 2;
CRAP = 3;
}
required string id = 1;
required string name = 2;
required string description = 3;
required Category category = 4;
required uint64 created = 5;
}


You'll need to compile that to a class somehow (I used lein-protobuf).

I'll let the code do most of the talking here. I put some helpful comments in for your benefit:

(ns sparkquet.core
  (:require [flambo.conf :as conf]
            [flambo.api :as f])
  (:import
    sparkquet.Document$MyDocument           ; Import our protobuf
    sparkquet.Document$MyDocument$Category  ; and our enum
    sparkquet.OnlyStuff))

(defn make-protobuf
  "Helper function to make a protobuf from a hashmap. You could also use
  something like clojure-protobuf: https://github.com/ninjudd/clojure-protobuf"
  [data]
  (let [builder (Document$MyDocument/newBuilder)]
    (doto builder
      (.setId (:id data))
      (.setName (:name data))
      (.setDescription (:description data))
      (.setCategory (:category data))
      (.setCreated (:created data)))
    (.build builder)))

(defn produce-my-protobufs
"This function serves as a generic source of protobufs. You can replace
this with whatever you like. Perhaps you have a .csv file that you can
open with f/text-file and map to a protobuf? Whatever you like."
[sc]
(f/parallelize
sc
(map make-protobuf [
{:id "1" :name "Thing 1" :description "This is a thing"
:category Document$MyDocument$Category/THINGS :created (System/currentTimeMillis)}
{:id "2" :name "Thing 2" :description "This is a thing"
:category Document$MyDocument$Category/THINGS :created (System/currentTimeMillis)}
{:id "3" :name "Crap 1" :description "This is some crap"
:category Document$MyDocument$Category/CRAP :created (System/currentTimeMillis)}
{:id "4" :name "Stuff 1" :description "This is stuff"
:category Document$MyDocument$Category/STUFF :created (System/currentTimeMillis)}
{:id "5" :name "Stuff 2" :description "This is stuff"
:category Document$MyDocument$Category/STUFF :created (System/currentTimeMillis)}
{:id "6" :name "Stuff 3" :description "This is stuff"
:category Document$MyDocument$Category/STUFF :created (System/currentTimeMillis)}
{:id "7" :name "Stuff 4" :description "This is stuff"
:category Document$MyDocument$Category/STUFF :created (System/currentTimeMillis)}])))

(defn write-protobufs!
  [rdd job outfilepath]
  (-> rdd
      (f/map-to-pair (f/fn [buf] [nil buf])) ; We need to have a PairRDD
      (.saveAsNewAPIHadoopFile
        outfilepath          ; Can (probably should) be an hdfs:// url
        Void                 ; We don't have a key class, just some protobufs
        Document$MyDocument  ; Would be a static import + .class in java
        ParquetOutputFormat  ; Use the ParquetOutputFormat
        (.getConfiguration job)))) ; Protobuf things are present on the job config.

(defn read-protobufs
  "Use Spark's .newAPIHadoopFile to load your protobufs"
  [sc job infilepath]
  (-> (.newAPIHadoopFile sc
        infilepath           ; Or hdfs:// url
        ParquetInputFormat
        Void                 ; Void key (.newAPIHadoopFile always returns (k,v) pair rdds)
        Document$MyDocument  ; Protobuf class for value
        (. job getConfiguration))
      (f/map (f/fn [tup] (._2 tup))))) ; Strip void keys from our pair data.

(defn get-job
  "Important initializers for Parquet Protobuf support. Updates a job's configuration"
  []
  (let [job (Job.)]
    ; You need to set the read support and write support classes
    (ParquetOutputFormat/setWriteSupportClass job ProtoWriteSupport)
    ; You also need to tell the writer your protobuf class (reader doesn't need it)
    (ProtoParquetOutputFormat/setProtobufClass job Document$MyDocument)
    job))

(defn -main []
  (let [conf (-> (conf/spark-conf)
                 (conf/master "local[4]")   ; Run locally with 4 workers
                 (conf/app-name "protobuftest"))
        sc (f/spark-context conf)           ; Create a spark context
        job (get-job)                       ; Create a Hadoop job to hold configuration
        path "hdfs://localhost:9000/user/protobuftest2"]

    ; First, we can write our protobufs
    (-> sc
        (produce-my-protobufs)   ; Get your Protobuf RDD
        (write-protobufs! job path))

    ; Now, we can read them back
    (-> sc
        (read-protobufs job path)
        (f/collect)
        (first)
        (.getId))

    ; You can also add a Parquet-level filter on your job to massively improve performance
    ; when running queries that can be easily pared down.
    (ParquetInputFormat/setUnboundRecordFilter job OnlyStuff)
    (-> sc
        (read-protobufs job path)
        (f/collect)) ; There should only be the 4 items now.

    ; If you like, you can set a *projection* on your job. This will read a
    ; subset of your fields for efficiency. Here's what you might do if you
    ; just needed names filtered by category:
    (ProtoParquetInputFormat/setRequestedProjection
      job "message MyDocument { required binary name; required binary category; }")
    (-> sc
        (read-protobufs job path)
        (f/map (f/fn [buf] (.getName buf)))
        (f/collect)))) ; Remember, the record filter is still applied.

; Defs for REPL usage
(comment
  (def conf (-> (conf/spark-conf)
                (conf/master "local[4]")   ; Run locally with 4 workers
                (conf/app-name "protobuftest")))
  (def sc (f/spark-context conf))          ; Create a spark context
  (def job (get-job))                      ; Create a Hadoop job to hold configuration
  (def path "hdfs://localhost:9000/user/protobuftest4"))


Lots of stuff going on here, but some of the trickier bits:

• The saveAsNewAPIHadoopFile and newAPIHadoopFile methods exist on and return, respectively, only Pair RDDs. If you have un-keyed data, as we do, you'll need to pack/unpack your data into tuples before/after saving/loading if you want to pretend like you just have a stream of protobufs.
Just use Void as the key class when you call the relevant method.

• You need to use a hadoop Job object to store and pass around configuration.

• You need to set the support classes for your input and output formats. You'll also need to set the protobuf class using setProtobufClass on your ProtoParquetOutputFormat. You don't need to do this on input.

### Filters

You can use setUnboundRecordFilter on ParquetInputFormat to do really efficient filtering on your data as you read it. Since Parquet is aware of the protobuf file's layout, it can check only the fields it needs for the filter, and only deserialize the rest of the protobuf if the filter passes. This is very fast.

To create a filter, you implement the UnboundRecordFilter interface, which has one method, bind. You can use this method to bind the filter you create with the readers passed to the bind method. I used this one java helper, which implements a filter. This could also be done in clojure with a gen-class, but lein works well enough on java sources that we may as well do it this way.

package sparkquet;

import parquet.column.ColumnReader;
import parquet.filter.RecordFilter;
import parquet.filter.ColumnRecordFilter;
import parquet.filter.UnboundRecordFilter;
import parquet.filter.ColumnPredicates;

import static sparkquet.Document.MyDocument;

public class OnlyStuff implements UnboundRecordFilter {
    public RecordFilter bind(Iterable<ColumnReader> readers){
        return ColumnRecordFilter.column(
            "category",
            ColumnPredicates.equalTo(MyDocument.Category.STUFF)
        ).bind(readers);
    }
}

### Projections

Parquet's protobuf support will let you define a projection, which is a way of telling it what fields to read (generally a subset of the fields that exist). Since Parquet is a column store, this means it can efficiently read just this data and leave the rest.

Defining a projection is an unfortunately poorly-documented procedure. To define a projection, you pass a string to ProtoParquetInputFormat/setRequestedProjection.
The string should be a set of field definitions in an apparently-undocumented format that resembles protobuf's. Twitter's Parquet announcement blog post has some examples, but unfortunately the examples are for some different version of Parquet, since Parquet no longer supports a string type (use binary instead). For our example, we use the following to extract name (a string) and category (an enum):

message MyDocument {
    required binary name;
    required binary category;
}

### Performance

I didn't (and won't) do formal benchmarks, so I can only give my remembrances from working on about 6GB of wide data:

• Running a mapreduce job after reading the data from CSV took about 90 seconds.
• Running the same job on protobufs from Parquet took about 130 seconds. The extra 40 seconds was probably deserialization overhead.
• Adding projection mapping to trim the 45-odd fields down to the 4 I needed dropped the job to about 60 seconds.
• Moving the “primary” filter from a Spark filter task to a Parquet filter reduced the time to just 20 seconds.

So, in this case, Parquet turned out to be a win. That's it for this post. I hope it helped you figure this thing out.

### HN Daily

#### Daily Hacker News for 2014-08-19

### Planet Clojure

#### Functional-ish Ruby

In a recent Apprentice Blog of the Week, Alex Hill detailed one way that we can apply common Ruby patterns to our Clojure code. I've noticed a similar effect while making the opposite transition, too. Having spent a few months writing mostly Clojure and then transitioning back into writing mostly Ruby, it was interesting to see the way my experience with common patterns in Clojure influenced the way I approached writing Ruby. Specifically, a couple of patterns I really enjoy using in Clojure are with and when macros.
Usually, a macro starting with with- means that something is happening around the code you pass to it, and a macro starting with when- means that your code will be executed if a certain condition is met.

For instance, we could write a macro called with-timing that times our code by setting a start time, evaluating the code, then logging the difference between the start and end times before returning our return value.

(defmacro with-timing [body]
  `(let [start# (now)
         ret# ~body]
     (logger/log (- (now) start#))
     ret#))

(defn timed-operation [x]
  (with-timing
    (calculate-some-things x)))

We might also write a macro called when-valid, which takes a record that we created from some user input, and then only evaluates our code if the record is valid, otherwise using the generic handler for invalid records.

(defmacro when-valid [record & body]
  `(if (valid? ~record)
     (do ~@body)
     (render-invalid ~record)))

(defn response-for [thing]
  (when-valid thing
    (render-created thing)))

We can implement our timing macro similarly in Ruby, by simply writing a method that takes a block to be called and timed.

def with_timing(&block)
  start_time = Time.now
  return_value = block.call
  log(Time.now - start_time)
  return_value
end

def timed_operation(x)
  with_timing do
    calculate_some_stuff(x)
  end
end

We can also reduce the duplication of a common Rails controller pattern by writing something similar to our when-valid macro in Ruby.

def when_valid(record, &block)
  if record.valid?
    block.call(record)
  else
    flash[:error] = record.errors.messages
    render :new
  end
end

def create
  when_valid(Thing.create(thing_params)) do |thing|
    redirect_to thing
  end
end

Here's another useful when method for handling HTTP responses in Ruby that Myles Megyesi shared with me.
def when_status(response, responders)
  if responder = responders[response[:status]]
    responder.call(response)
  else
    handle_generically(response)
  end
end

def get_all_the_things
  when_status get("/things"), {
    200 => lambda do |response|
      load_things(response[:body])
    end,
    404 => lambda do
      "Whoops"
    end
  }
end

After transitioning back to writing Ruby after Clojure, I found myself naturally thinking of ways to use blocks and lambdas, among other functional-ish idioms, much more than before writing Clojure, and usually with positive results. It's interesting to see how learning new languages expands the way you write the languages you already know. Perhaps there are patterns from certain languages you know just waiting to be applied somewhere else.

## August 19, 2014

### CompsciOverflow

#### Exponential-size numbers in NP completeness reduction

In the proof of Theorem 4 in [GS'12], the authors reduce an instance of PARTITION to their problem. Therefore, they create for each element $a_i$ in the instance of PARTITION a number $2^{c \cdot a_i}$ for a suitable constant $c$, which is later used in the reduction. They argue that the instance remains of polynomial size, since these exponential-size numbers can be encoded implicitly.

Nevertheless, can we really work with those numbers in polynomial time? What if we add two such numbers $2^{c \cdot a_i}$ and $2^{c \cdot a_j}$ for $a_i \neq a_j$ in the course of the algorithm? Then the resulting number cannot be encoded in this way any longer. Is this reduction valid?

[GS'12]: Martin Groß and Martin Skutella, "Generalized maximum flows over time", 2012.

### StackOverflow

#### ZMQ: prevent sending "timed out" messages

I wonder how I can "abort" a message after it has not been sent for some time.
The scenario is simple:

1) Client connects to server
2) The server goes down
3) Client sends a message; there's no issue here, as ZMQ queues the message locally (so the "send" operation is successful)
4) Assume I've set RCVTIMEO; I get the timeout
5) After I got the timeout I no longer wish to send the message, but once the server goes up again ZMQ will transmit the message. How can I prevent it?

The reason I want to prevent this is that once I got the timeout, I responded back to my customer with a failure message (e.g. "the request could not be processed due to timeout"), and it would be a real issue if his request would eventually get transmitted and processed... Hope my question is clear... Thx!

### Dave Winer

Readme: About Little Facebook Editor. With the ability to update posts, Facebook becomes a publishing surface for blogging software.

### StackOverflow

#### Clojure/dataset: group-by multiple columns hierarchically?

I would like to implement a function that can group-by for multiple columns hierarchically. I can illustrate my requirement with the following tentative implementation for two columns:

(defn group-by-two-columns-hierarchically
  [col1 col2 table]
  (let [data-by-col1 ($group-by col1 table)
        data-further-by-col2 (into {}
                                   (for [[k v] data-by-col1]
                                     [k ($group-by col2 v)]))]
    data-further-by-col2))

I'm seeking help on how to generalize to an arbitrary number of columns. (I understand that Incanter supports group-by for multiple columns, but it only provides a flat structure, not a hierarchy: a map from a composite key of multiple columns to datasets.) Thanks for your help!

Note: to make Michał's solution work for an Incanter dataset, only a slight modification is needed, replacing "group-by" by "incanter.core/$group-by", illustrated by the following experiment:

(defn group-by*
"Similar to group-by, but takes a collection of functions and returns
a hierarchically grouped result."
[fs coll]
(if-let [f (first fs)]
(into {} (map (fn [[k vs]]
[k (group-by* (next fs) vs)])
(group-by f coll)))
coll))
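For comparison, the same hierarchical grouping is easy to sketch outside Clojure. The Python function below is my own illustration (names are mine, not from the original answer), mirroring the recursive structure of group-by* above: group by the first key function, then recurse into each group with the remaining key functions.

```python
def group_by_star(fs, coll):
    """Group coll hierarchically by a sequence of key functions,
    mirroring the recursive group-by* shown above."""
    if not fs:
        return coll
    f, rest = fs[0], fs[1:]
    groups = {}
    for item in coll:
        groups.setdefault(f(item), []).append(item)
    # Recurse into each group with the remaining key functions
    return {k: group_by_star(rest, vs) for k, vs in groups.items()}

rows = [("a", 1, "x"), ("a", 2, "y"), ("b", 1, "z")]
result = group_by_star([lambda r: r[0], lambda r: r[1]], rows)
# result["a"] is itself a map keyed by the second column
```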
at TestingOutTraits$$anon$1.filter(TestingOutTraits.scala:4)
at MyTrait$class.filter(TestingOutTraits.scala:34)
at TestingOutTraits$$anon$1.filter(TestingOutTraits.scala:4)

thanks, Dean

#### ScalaTest running only a single test in a suite

While developing a test suite for a class, I've begun running into situations where ScalaTest would only run a single test, or exclude some of them.

### CompsciOverflow

#### Proving the (in)tractability of this Nth prime recurrence

As follows from my previous question, I've been playing with the Riemann hypothesis as a matter of recreational mathematics. In the process, I've come to a rather interesting recurrence, and I'm curious as to its name, its reductions, and its tractability towards the solvability of the gap between prime numbers.

Tersely speaking, we can define the gap between each prime number as a recurrence of preceding candidate primes. For example, for our base of $p_0 = 2$, the next prime would be:

$\qquad \displaystyle p_1 = \min \{ x > p_0 \mid -\cos(2\pi(x+1)/p_0) + 1 = 0 \}$

Or, as we see by plotting this out: $p_1 = 3$. We can repeat the process for $n$ primes by evaluating each candidate prime recurring forward. Suppose we want to get the next prime, $p_2$. Our candidate function becomes:

$\qquad \displaystyle \begin{align} p_2 = \min\{ x > p_1 \mid f_{p_1}(x) + (&(-\cos(2\pi(x+1)/p_1) + 1) \\ \cdot &(-\cos(2\pi(x+2)/p_1) + 1)) = 0\} \end{align}$

Where:

$\qquad \displaystyle f_{p_1}(x) = -\cos(2\pi(x+1)/p_0) + 1$, as above.

It's easy to see that each component function only becomes zero on integer values, and it's equally easy to show how this captures our AND- and XOR-shaped relationships cleverly, by exploiting the properties of addition and multiplication in the context of a system of trigonometric equations.
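The vanishing behaviour described above is easy to check numerically (this check is my own, not part of the original question): the $k$-th cosine factor is zero precisely when $p$ divides $x+k-1$, so the product over $k = 2, \dots, p$ is zero exactly when $p$ does not divide $x$.

```python
from math import cos, pi

def factor(x, p, k):
    # The k-th factor of the product: zero exactly when p divides (x + k - 1)
    return -cos(2 * pi * (x + k - 1) / p) + 1.0

def cos_product(x, p):
    # Product over k = 2..p: zero iff p does NOT divide x,
    # since p divides exactly one of x+1, ..., x+p-1 in that case
    prod = 1.0
    for k in range(2, p + 1):
        prod *= factor(x, p, k)
    return prod
```

So summing these products over the preceding primes gives a function that is zero exactly at the integers coprime to all of them, which is what makes the recurrence select the next prime.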
The recurrence becomes:

$\qquad f_{p_0} = 0\\
\qquad p_0 = 2\\
\qquad \displaystyle f_{p_n}(x) = f_{p_{n-1}}(x) + \prod_{k=2}^{p_{n-1}} (-\cos(2\pi(x+k-1)/p_{n-1}) + 1)\\
\qquad \displaystyle p_n = \min\left\{ x > p_{n-1} \mid f_{p_n}(x) = 0\right\}$

... where the entire problem hinges on whether we can evaluate the $\min$ operator over this function in polynomial time. This is, in effect, a generalization of the Sieve of Eratosthenes.

Working Python code to demonstrate the recurrence:

from math import cos, pi

def cosProduct(x, p):
    """ Handles the cosine product in a handy single function """
    ret = 1.0
    for k in xrange(2, p + 1):
        ret *= -cos(2 * pi * (x + k - 1) / p) + 1.0
    return ret

def nthPrime(n):
    """ Generates the nth prime, where n is a zero-based integer """
    # Preconditions: n must be an integer greater than -1
    if not isinstance(n, int) or n < 0:
        raise ValueError("n must be an integer greater than -1")
    # Base case: the 0th prime is 2, 0th function vacuous
    if n == 0:
        return 2, lambda x: 0
    # Get the preceding evaluation
    p_nMinusOne, fn_nMinusOne = nthPrime(n - 1)
    # Define the function for the Nth prime
    fn_n = lambda x: fn_nMinusOne(x) + cosProduct(x, p_nMinusOne)
    # Evaluate it (I need a solver here if it's tractable!)
    for k in xrange(p_nMinusOne + 1, int(p_nMinusOne ** 2.718281828)):
        if fn_n(k) == 0:
            p_n = k
            break
    # Return the Nth prime and its function
    return p_n, fn_n

A quick example:

>>> [nthPrime(i)[0] for i in range(20)]
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71]

The trouble is, I'm now in way over my head, both mathematically and as a computer scientist. Specifically, I am not competent with Fourier analysis, with defining uniform covers, or with the complex plane in general, and I'm worried that this approach is either flat-out wrong or hides a lurking horror of a 3SAT problem that elevates it to NP-completeness. Thus, I have three questions here:

1. Given my terse recurrence above, is it possible to deterministically compute or estimate the location of the zeroes in polynomial time and space?
2. If so or if not, is it hiding any other subproblems that would make a polytime or polyspace solution intractable?
3. And if by some miracle (1) and (2) hold up, what dynamic programming improvements would you make in satisfying this recurrence, from a high level? Clearly, iteration over the same integers through multiple functions is inelegant and quite wasteful.

### Lobsters

#### The frozen const string build reason pattern

This pattern generalises to any method where you are currently returning a boolean... but it is "true" for many different reasons and "false" if and only if none of the other reasons apply.

As a concrete example, consider this commonish task... Decide whether or not you must rebuild something, depending on whether the output exists and/or whether the output is older than any of the inputs it depends on. Pretty much what "make" does for you. But suppose you are writing a method "must_build?"; the "obvious" return type is boolean true/false.

I have settled on a better pattern... I always return a frozen const string. Why? Because it enables me to efficiently inform my user why I made the choice I did. For example (speaking Ruby, but the pattern translates to other languages):

BUILD_REASON_OUTPUT_DOESNT_EXIST = "Building because output doesn't exist".freeze
BUILD_REASON_OLDER_MTIME = "Rebuilding because output had older mtime than input".freeze
BUILD_REASON_DONT_BUILD = "Not rebuilding because file is up to date".freeze

def must_build?
  return BUILD_REASON_OUTPUT_DOESNT_EXIST unless FileTest.exists? output_file
  return BUILD_REASON_OLDER_MTIME if File.stat(output_file).mtime < File.stat(input_file).mtime
  BUILD_REASON_DONT_BUILD
end

Usage...

build_reason = must_build?
# Use object identity equal?()
if !build_reason.equal?(BUILD_REASON_DONT_BUILD)
  build
end
# Log what you are doing and why....
log build_reason

### StackOverflow

#### Java ID3 audio tags lib

I'm looking for a good library to edit .mp3 ID3v(22,23,24) tags (author, title, track and that kind of stuff), written in Java or Clojure. Any ideas? Is there some "de facto" standard in this field? I have already looked at this question: "I need an ID3 tag reader library for Java - preferably a fast one". But if there is something more, that would be great... Perfect would be if the library supported not only .mp3 but also .ogg and .wma... Thanks everybody, and sorry for my English...

#### Calling scala from R (using jvmr)

I tried to integrate some Scala code into R ... unfortunately, I failed:

myself@mycomputer:~/$ R

library("jvmr")

a <- scalaInterpreter()


Error in .jcall(.jnew("org.ddahl.jvmr.impl.RScalaInterpreter$"), "Lorg/ddahl/jvmr/impl/RScalaInterpreter;", :
  java.lang.ClassCastException: scala.tools.nsc.settings.MutableSettings$BooleanSetting cannot be cast to scala.reflect.internal.settings.MutableSettings$SettingValue

Any idea? Doing the same with Java (b <- javaInterpreter()) works. Thanks!

### DragonFly BSD Digest

#### Moving past ports

Here's a nice advantage for dports and DragonFly: since it's an overlay on FreeBSD ports, it's possible to move to newer or different versions of software without waiting for it to happen in FreeBSD. For example: there's a newer version of the xorg intel driver now in dports - newer than what's in ports.

### DataTau

#### Fixing Bad Data in Datomic

### StackOverflow

#### Are some data structures more suitable for functional programming than others?

In Real World Haskell, there is a section titled "Life without arrays or hash tables" where the authors suggest that lists and trees are preferred in functional programming, whereas an array or a hash table might be used instead in an imperative program. This makes sense, since it's much easier to reuse part of an (immutable) list or tree when creating a new one than to do so with an array. So my questions are:

• Are there really significantly different usage patterns for data structures between functional and imperative programming?
• If so, is this a problem?
• What if you really do need a hash table for some application? Do you simply swallow the extra expense incurred for modifications?

#### How can I begin understanding the Milner-Hindley?

I often see notation like this in Haskell papers, but I have no clue what the hell any of it means. I have no idea what branch of mathematics it's supposed to be. I recognize the letters of the Greek alphabet of course, and symbols such as "∉" (which usually means that something is not an element of a set).
On the other hand, I've never seen "⊢" before (Wikipedia claims it might mean "partition"). I'm also unfamiliar with the use of the vinculum here. (Usually it denotes a fraction, but that does not appear to be the case here.) I imagine SO is not a good place to be explaining the entire Milner-Hindley algorithm. But if somebody could at least tell me where to start looking to comprehend what this sea of symbols means, that would be helpful. (I'm sure I can't be the only person who's wondering...)

#### Why are so few things @specialized in Scala's standard library?

I've searched for the use of @specialized in the source code of the standard library of Scala 2.8.1. It looks like only a handful of traits and classes use this annotation: Function0, Function1, Function2, Tuple1, Tuple2, Product1, Product2, AbstractFunction0, AbstractFunction1, AbstractFunction2. None of the collection classes are @specialized. Why not? Would this generate too many classes? This means that using collection classes with primitive types is very inefficient, because there will be a lot of unnecessary boxing and unboxing going on. What's the most efficient way to have an immutable list or sequence (with IndexedSeq characteristics) of Ints, avoiding boxing and unboxing?

#### Why when using an overloaded constructor "new" is required?
When attempting to provide an overloaded constructor as below:

case class Neuron(weight: Double, tHold: Double, var isFired: Boolean, inputNeuron: List[Neuron], id: String) {
  def this() = this(0, 0, false, List(), "")
}

val n1 = Neuron()

causes the compile-time error:

not enough arguments for method apply: (weight: Double, tHold: Double, isFired: Boolean, inputNeuron:

So I need to use:

val n1 = new Neuron()

But if I remove the overloaded "this" reference, I can call the constructor without using "new":

case class Neuron(weight: Double, tHold: Double, var isFired: Boolean, inputNeuron: List[Neuron], id: String)

val n = Neuron(0.0, 0.0, false, List(), "")

Why do I need to use "new" in the above scenario, and why is "new" required only when using an overloaded constructor?

#### Zipping two lists into a single list of objects rather than a list of tuples?

val l1 = List(1, 2, 3)
val l2 = List('a', 'b', 'c')

val tupleList = l1.zip(l2)
// List((1,a), (2,b), (3,c))

val objectList = l1.zip(l2).map(tuple => new MyObject(tuple._1, tuple._2))
// List(MyObject@7e1a1da6, MyObject@5f7f2382, MyObject@407cf41)

After writing this code, I feel like the map(tuple => new MyObject(tuple._1, tuple._2)) part looks a little dirty, for two reasons:

1. I shouldn't be creating the tuples just to discard them in favor of MyObject. Why not just zip l1 and l2 into a list of MyObject in the first place?
2. tuple._1 and tuple._2 don't have any semantics. It can take some mental gymnastics to make sure I'm giving the Int as the first parameter and the Char as the second.

Is it possible to zip two Lists into my own object? How can I make the MyObject construction above more semantically clear?

### TheoryOverflow

#### What is the complexity of counting the number of solutions of a P-Space Complete problem?

How about higher complexity classes? I guess it would be called #P-Space but I have found only one article vaguely mentioning it.
How about the counting version of EXP-TIME-Complete, NEXP-Complete, as well as EXP-SPACE-Complete problems? Is there any previous work that one can cite in regards to this, or any type of inclusion or exclusion like Toda's Theorem?

#### May Boolean circuits be exponentially more concise than Boolean formulae?

Consider a family $(f_n)_{1 \leq n}$ of Boolean functions, where $f_n$ is a function on $n$ variables. Consider for every $n$ the smallest Boolean formula $F_n$ describing $f_n$, and the smallest Boolean circuit $C_n$ describing $f_n$. Say we have $|F_n| = \Omega(g(|C_n|))$ for a certain function $g$. What is the fastest-growing $g$ for which this is known to be possible, and the slowest-growing $g$ for which it is known to be impossible? (From the comments, it seems like there is still a gap here, but I'm trying to understand which one.)

This is the "simple" version of my question. What I am interested in is a multi-output, probabilistic (=weighted) variant of the problem, defined as follows. It is clear how to extend circuits to be multi-output, and I define a $k$-output formula to be just a $k$-tuple of formulas on the same inputs. I say that the input variables have a certain probability of being true (written in binary and accounted for in the circuit or formula size), each independently from the others, and I look at the probability distribution on the tuple of outputs (forgetting which input is yielding which output, just looking at the distribution on values), given this product distribution on the inputs, in the circuit and formula context. Here again the circuits are certainly more concise than formulae, but how much? Are there some distributions that can be exponentially more concise to represent with circuits, intuitively because of sub-expression reuse?
To give an example for this more elaborate version, consider the following distribution on $n$ outputs:

• $000 \cdots 00$ with probability $1/2$,
• $100 \cdots 00$ with probability $1/4$,
• $110 \cdots 00$ with probability $1/8$,
• ...
• $111 \cdots 10$ with probability $1/2^n$,
• $111 \cdots 11$ with probability $1/2^n$.

There is a multi-output probabilistic circuit of size $O(n)$ which generates this (and reuses the draw of the $i$-th bit to draw the $(i+1)$-th). By contrast, the straightforward Boolean function encoding of this is quadratic, and I can't see how you could make it shorter, but yet cannot prove it...

### /r/netsec

#### Large CHS Medical hack was result of Heartbleed vulnerability in Juniper VPN device

#### Reversing the dropbox client on windows

### StackOverflow

#### Wiremock with Scalatra

I followed the example and attempted to use WireMock to mock an authentication service used by a Scalatra app. However, I can't get WireMock and Scalatra to work together. The idea is to provide a mock response for the authentication request sent by Scentry to another auth provider. How to combine a typical Scalatra test:

def unauthenticated = get("/secured") {
  status must_== 400
}

with a WireMock stub for:

stubFor(WireMock.post(urlMatching("/some/auth/service*"))
  .willReturn(
    aResponse()
      .withStatus(200)))

#### What evidence is there that Clojure Zippers would benefit from being expressed as comonads?

In this presentation [2005] we read at slide 32:

The zipper datatype hides a comonad. This is exactly the comonad one needs to structure attribute evaluation.

So it seems you can express Zippers in terms of Comonads. This even seems possible in Scala. Looking at the zipper source, we see zippers expressed as Clojure metadata. My question is: what evidence is there that Clojure Zippers would benefit from being expressed as comonads? Eric suggests the benefit is that we get all the possible zippers over the original!
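To make "all the possible zippers over the original" concrete: for a list zipper, the comonadic extract reads the focused element, and duplicate yields the zipper refocused at every position. A minimal sketch in Python (my own illustration; real Clojure zippers carry more structure, and duplicate here returns a plain list of refocusings rather than a zipper-of-zippers, to keep it short):

```python
def from_list(xs):
    # A list zipper: (reversed left context, focus, right context)
    return ([], xs[0], list(xs[1:]))

def extract(z):
    """Comonadic extract: the element under focus."""
    return z[1]

def right(z):
    left, focus, rest = z
    return ([focus] + left, rest[0], rest[1:])

def duplicate(z):
    """Comonadic duplicate: the zipper refocused at every position,
    i.e. 'all the possible zippers over the original'."""
    left, focus, rest = z
    z2 = from_list(list(reversed(left)) + [focus] + list(rest))
    out = [z2]
    for _ in range(len(z2[2])):
        z2 = right(z2)
        out.append(z2)
    return out

z = from_list([1, 2, 3])
# extract(z) is 1; extracting from each zipper in duplicate(z) recovers [1, 2, 3]
```

The comonad laws say, roughly, that extracting from a duplicated zipper gives back the original zippers, which is exactly the "zipper of all refocusings" intuition above.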
### /r/compsci

#### Computing the mass of a coin based on the sound it makes when it falls

### Lobsters

#### Quala: Custom Type Systems for Clang

### DataTau

#### Linked Data Analysis with Tensor Factorization [SLIDES]

#### Johns Hopkins, SwiftKey and Coursera partner on Data Science Capstone

### StackOverflow

#### How can I get automatic dependency resolution in my scala scripts?

I'm just learning Scala, coming out of the Groovy/Java world. My first script requires a 3rd-party library, TagSoup, for XML/HTML parsing, and I'm loath to have to add it the old-school way: that is, downloading TagSoup from its developer website, and then adding it to the class path. Is there a way to resolve third-party libraries in my Scala scripts? I'm thinking Ivy, I'm thinking Grape. Ideas?

The answer that worked best for me was to install conscript:

curl https://raw.github.com/n8han/conscript/master/setup.sh | sh
cs harrah/xsbt --branch v0.11.0

Then I could import TagSoup fairly easily in example.scala:

/***
libraryDependencies ++= Seq(
  "org.ccil.cowan.tagsoup" % "tagsoup" % "1.2.1"
)
*/

def getLocation(address:String) = {
  ...
}

And run using scalas:

scalas example.scala

Thanks for the help!

### Planet Clojure

#### Getting started in Clojure…

Getting started in Clojure with IntelliJ, Cursive, and Gorilla: part 1: setup, part 2: workflow.

From Part 1: This video goes through, step-by-step, how to set up a productive Clojure development environment from scratch. This part looks at getting the software installed and running. The second part to this video (vimeo.com/103812557) then looks at the sort of workflow you could use with this environment. If you follow through both videos you'll end up with Leiningen, IntelliJ, Cursive Clojure and Gorilla REPL all configured to work together.

Some links: Nothing surprising, but useful if you are just starting out.
### StackOverflow

#### Clojure, implement range, why this solution doesn't work

I want to implement the range function of Clojure. Why won't the following code work?

(fn [low high]
  (loop [low low
         ret []]
    (if (= low high)
      (list ret)
      (recur (inc low) (concat ret [low])))))

#### error: overloaded method value get with alternatives in getting a point on an image

I am using this:

var res = new Array[Byte](1)
var u = image.get(p.x, p.y, res)

where:

val image = new Mat
var p = new Point(3, 32)

and I'm getting an error that says "overloaded method value get with alternatives". Can't figure out the problem. Please help me with that! Thanks!

### CompsciOverflow

#### Proving Quicksort has a worst case of O(n²)

I am sorting the following list of numbers, which is in descending order. I am using QuickSort to sort, and it is known that the worst-case running time of QuickSort is $O(n^2)$.

import java.io.File;
import java.io.FileNotFoundException;
import java.util.*;

public class QuickSort {
    static int pivotversion;
    static int datacomparison = 0;
    static int datamovement = 0;

    public static void main(String args[]) {
        Vector<Integer> container = new Vector<Integer>();
        String userinput = "data2.txt";
        Scanner myScanner = new Scanner("foo"); // variable used to read file
        Scanner scan = new Scanner(System.in);
        System.out.println("Enter 1 to set pivot to be first element");
        System.out.println("Enter 2 to set pivot to be median of first , middle , last element of the list");
        System.out.println("Your choice : ");
        //pivotversion = scan.nextInt();
        try {
            File inputfile = new File("C:\\Users\\8382c\\workspace\\AdvanceAlgorithmA3_Quicksort\\src\\" + userinput);
            myScanner = new Scanner(inputfile);
        } catch (FileNotFoundException e) {
            System.out.println("File cant be found");
        }
        String line = myScanner.nextLine(); // read 1st line which contains the number of numbers to be sorted
        while (myScanner.hasNext()) {
            container.add(myScanner.nextInt());
        }
        System.out.println(line);
        quickSort(container, 0, container.size() - 1);
        for (int i = 0; i < container.size(); i++) {
            System.out.println(container.get(i));
        }
        System.out.println("=========================");
        System.out.println(datamovement);
        System.out.println(datacomparison);
    }

    public static int partition(Vector<Integer> container, int left, int right) {
        int i = left, j = right;
        int tmp;
        int pivot = 0;
        pivot = container.get(left);
        boolean maxarraybound = false;
        i++;
        while (i <= j) {
            while (container.get(i) < pivot && maxarraybound == false) {
                if (i == container.size() - 1) {
                    maxarraybound = true;
                } else {
                    i++;
                    datacomparison++;
                }
            }
            while (container.get(j) > pivot) {
                j--;
                datacomparison++;
            }
            if (i <= j) {
                tmp = container.get(i); // considered data movement??
                container.set(i, container.get(j));
                datamovement++;
                container.set(j, tmp);
                datamovement++;
                i++;
                j--;
            }
        };
        tmp = container.get(left);
        container.set(left, container.get(i - 1));
        datamovement++;
        container.set(i - 1, tmp);
        datamovement++;
        return i - 1;
    }

    public static void quickSort(Vector<Integer> container, int left, int right) {
        int index = partition(container, left, right);
        if (left < index - 1)
            quickSort(container, left, index - 1);
        if (index + 1 < right)
            quickSort(container, index + 1, right);
    }
}

I am trying to prove to myself that the worst-case running time of QuickSort is indeed $O(n^2)$ by summing up the total number of data comparisons and data movements in the algorithm. In my current situation, I have an input of 10000 numbers. I would expect a total sum of data comparisons and data movements to be around 100 million, but I am only getting around 26 million. I am sure I have missed out some "data movement" and "data comparison" counts in my algorithm. Can someone point out to me where, as I have no clue?

### Planet Theory

#### Reviewing Scales

I'm just about finished reviewing for CoNEXT (Conference on Emerging Networking Experiments and Technologies), and am starting reviewing for ITCS (Innovations in Theoretical Computer Science).
One notable variation in the process is the choice of the score scale. For CoNEXT, the program chairs chose a 2-value scale: accept or reject. For ITCS, the program chair chose a 9-point scale. Scoring from 1-9 or 1-10 is not uncommon for theory conferences. I dislike both approaches, but, in the end, believe that it makes minimal difference, so who am I to complain? The accept-or-reject choice is a bit too stark. It hides whether you generously thought this paper should possibly get in if there's room, or whether you really are a champion for the paper. A not-too-unusual situation is a paper gets (at least initially) a majority of accept votes -- but nobody really likes the paper, or has confronted its various flaws. (Or, of course, something similar the other way around, although I believe the first case is more common, as it feels better to accept a close call than to reject one.) Fortunately, I think the chairs have been doing an excellent job (at least on the papers I reviewed) encouraging discussion on such papers as needed to get us to the right place. (Apparently, the chairs aren't just looking at the scores, but reading the reviews!) As long as there's actual discussion, I think the problems of the 2-score solution can be mitigated. The 9 point scale is a bit too diffuse. This is pretty clear. On the description of score semantics we were given, I see: "1-3 : Strong rejects". I'm not sure why we need 3 different numbers to represent a strong reject (strong reject, really strong reject, really really strong reject), but there you have it. The boundaries between "weak reject", "a borderline case" and "weak accept" (scores 4-6) also seem vague, and could easily lead to different people using different interpretations. Still, we'll see how it goes. As long as there's good discussion, I think it will all work out here as well. I prefer the Goldilocks scale of 5 values. 
I further think "non-linear" scoring is more informative: something like top 5%, top 10%, top 25%, top 50%, bottom 50%; but even scores corresponding to strong accept/weak accept/neutral/weak reject/strong reject seem more useful when trying to make decisions. Finally, as I have to say whenever I'm reviewing, HotCRP is still the best conference management software (at least for me as a reviewer).

### /r/compsci

#### My friend has developed this awesome triangulation drawing system! If you are in to computer graphics you need to check this out!

### AWS

#### Amazon SNS Update - Large Topics and MPNS Authenticated Mode

Amazon Simple Notification Service (SNS) is a fast and flexible push messaging service. You can easily send messages to Apple, Google, Fire OS and Windows devices, including Android devices in China (via Baidu Cloud Push). Today we are enhancing SNS with support for large topics (more than 10,000 subscribers) and authenticated delivery to MPNS (Microsoft Push Notification Service).

Large Topics

SNS offers two publish modes. First, you can push messages directly to specific mobile devices. Second, you can create an SNS topic, provide your customers with a mechanism to allow them to subscribe to the topic, and then publish messages to the topic with a single API call. This mode is great for broadcasting breaking news, announcing flash deals, and announcing in-game events or new features. You can combine customers from different platforms in the same topic and you can send a specific payload to each platform (for example, one for iOS and another for Android), again in a single call.
Suppose you have created the following topic:

With the ARN for the topic (arn:aws:sns:us-west-2:xxxxxxxxxxxx:amazon-sns) in hand, here's how you publish a message to all of the subscribers:

$result = $client->publish(array(
    'TopicArn' => 'arn:aws:sns:us-west-2:xxxxxxxxxxxx:amazon-sns',
    // Message is required
    'Message' => 'Hello Subscribers',
    'Subject' => 'Hello'
));

Today we are lifting the limit of 10,000 subscriptions per SNS topic; you can now create as many as you need and no longer need to partition large subscription lists across multiple topics. This has been a frequent request from AWS customers that use SNS to build news and media sharing applications. There is an administrative limit of 10 million subscriptions per topic, but we'll happily raise it if you expect to have more subscribers for a single topic. Fill out the Contact Us form, select SNS, and we'll take good care of you!

Authenticated Delivery to MPNS

Microsoft Push Notification Service (MPNS) is the push notification relay service for Windows Phone devices prior to Windows 8.1. SNS now supports authenticated delivery to MPNS. In this mode, MPNS does not enforce any limitations on the number of notifications that can be sent to a channel in any given day (per the documentation on Windows Phone Push Mode, there's a daily limit of 500 unauthenticated push notifications per channel). If you require this functionality for devices that run Windows 8.1 and above, please consider using Amazon SNS for Windows Notification Service (WNS).

-- Jeff;

### StackOverflow

#### Scala simple funsuite unit test with akka actors fails

Hey, I want to build some small FunSuite tests for an Akka actor application, but after combining TestKit with FunSuiteLike I can't call the test anymore. Anybody have an idea why this is happening? Are TestKit and FunSuite not compatible?
import org.scalatest.{FunSuiteLike, BeforeAndAfterAll}
import akka.testkit.{ImplicitSender, TestKit, TestActorRef}
import akka.actor.{ActorSystem}

class ActorSynchroTest(_system: ActorSystem) extends TestKit(_system)
  with FunSuiteLike with BeforeAndAfterAll with ImplicitSender {

  val actorRef = TestActorRef(new EbTreeDatabase[Int])
  val actor = actorRef.underlyingActor

  //override def afterAll = TestKit.shutdownActorSystem( system )

  test("EbTreeDatabase InsertNewObject is invoked") {
    val idList = List(1024L, 1025L, 1026L, 1032L, 1033L, 1045L, 1312L, 1800L)
    idList.foreach(x => actorRef ! EbTreeDataObject[Int](x, x, 1, None, null))
    var cursor: Long = actor.uIdTree.firstKey()
    var actorItems: List[Long] = List(cursor)
    while (cursor != actor.uIdTree.lastKey()) {
      cursor = actor.uIdTree.next(cursor)
      cursor :: actorItems
    }
    assert(idList.diff(actorItems) == List())
  }
}

The IntelliJ IDEA test environment says:

One or more requested classes are not Suites: model.ActorSynchroTest

#### how can I make a ComboBox with JavaFX using Scala?

and populate the combo box as well. Doing this (the way it is done in Java):

ObservableList<String> options = FXCollections.observableArrayList(
    "Option 1",
    "Option 2",
    "Option 3"
);
final ComboBox comboBox = new ComboBox(options);

produces an error.

#### Implicit class applicable to all Traversable subclasses including Array

I've run into a problem trying to create an implicit class applicable to all Traversable subclasses, including Array. I tried the following simple example in both Scala 2.11.1 and 2.10.4:

implicit class PrintMe[T](a: Traversable[T]) {
  def printme = for (b <- a) print(b)
}

As far as I understand, this should allow an implicit conversion to PrintMe so that printme can be called on any Traversable, including List and Array. E.g.:

scala> List(1,2,3).printme
123 // Great, works as I expected!

scala> Array(1,2,3).printme
<console>:23: error: value printme is not a member of Array[Int]
       Array(1,2,3).printme // Seems like for an Array it doesn't!
scala> new PrintMe(Array(1,2,3)).printme
123 // Yet explicitly building a PrintMe from an Array works

What's going on here? Why does the implicit conversion work for a List and not an Array? I understand there has been some trickery adapting Java arrays, but looking at the picture below from http://docs.scala-lang.org/overviews/collections/overview.html it certainly seems like Array is meant to behave like a subclass of Traversable.

#### What am I doing wrong around adding an additional case class constructor which first transforms its parameters?

So, I had a very simple case class:

case class StreetSecondary1(designator: String, value: Option[String])

This was working just fine. However, I kept having places where I was parsing a single string into a tuple which was then used to build an instance of this case class:

def parse1(values: String): StreetSecondary1 = {
  val index = values.indexOf(" ")
  StreetSecondary1.tupled(
    if (index > -1)
      //clip off string prior to space as designator and optionally use string after space as value
      (values.take(index), if (values.size > index + 1) Some(values.drop(index + 1)) else None)
    else
      //no space, so only designator could have been provided
      (values, None)
  )
}

So, I wanted to refactor all the different places with this same parsing code into the case class like this (but this won't compile):

case class StreetSecondary2(designator: String, value: Option[String]) {
  def this(values: String) = this.tupled(parse(values))
  private def parse(values: String): (String, Option[String]) = {
    val index = values.indexOf(" ")
    if (index > -1)
      //clip off string prior to space as designator and optionally use string after space as value
      (values.take(index), if (values.size > index + 1) Some(values.drop(index + 1)) else None)
    else
      //no space, so only designator could have been provided
      (values, None)
  }
}

It appears there is some chicken/egg problem around adding a case class constructor AND having a function that takes the parameter(s) and
transforms them prior to calling the actual constructor. I have fiddled with this (going on many tangents). I then resorted to trying the companion object pathway:

object StreetSecondary3 {
  private def parse(values: String): (String, Option[String]) = {
    val index = values.indexOf(" ")
    if (index > -1)
      //clip off string prior to space as designator and optionally use string after space as value
      (values.take(index), if (values.size > index + 1) Some(values.drop(index + 1)) else None)
    else
      //no space, so only designator could have been provided
      (values, None)
  }
  def apply(values: String): StreetSecondary3 = {
    val tuple: (String, Option[String]) = parse(values)
    StreetSecondary3(tuple._1, tuple._2) //Why doesn't .tupled method work here?
  }
}
case class StreetSecondary3(designator: String, value: Option[String])


What am I doing wrong in StreetSecondary2? Is there some way to get it to work? Surely there has to be a better, simpler way where I am not required to add all the companion object boilerplate present in StreetSecondary3. Is there? Thank you for any feedback and guidance you can give me on this.

UPDATE

Whew! Lots of lessons learned already.

A) The StreetSecondary2 parse method does not use the "this" implicit context of the case class instance being constructed (i.e. it is a static method in Java terms), so it works better moved to the companion object.

B) Unfortunately, when composing an explicit companion object for a case class, the compiler-provided "implicit companion object" is lost. The tupled method (and others, I am guessing; I sure wish there were a way to keep it and augment it as opposed to blowing it away) was contained in the compiler-provided "implicit companion object" and is not provided in the new explicit companion object. This was fixed by adding "extends ((String, Option[String]) => StreetSecondary)" to the explicit companion object.
C) Here's an updated solution (which also incorporates a more terse version of the parse function, with a nod of thanks to Gabriele Petronella):

object StreetSecondary4 extends ((String, Option[String]) => StreetSecondary4) {
  private def parseToTuple(values: String): (String, Option[String]) = {
    val (designator, value) = values.span(_ != ' ')
    (designator, Option(value.trim).filter(_.nonEmpty))
  }
  def apply(values: String): StreetSecondary4 =
    StreetSecondary4.tupled(parseToTuple(values))
}
case class StreetSecondary4(designator: String, value: Option[String])


This is barely better in terms of boilerplate than the StreetSecondary3 version. However, it now makes quite a bit more sense, because so much implicit context has been made explicit.

### CompsciOverflow

#### Solve Recurrence Equation Problem [duplicate]

This question is an exact duplicate of: Solve Recurrence Equation Problem

I asked this question before, and someone marked it as a duplicate. I don't know why people mark a question as a duplicate instead of answering it; please be kind and let others learn. The link to my question is here: Solve Recurrence Equation Problem, and it got 3 positive marks!

How do we calculate the answer of the following recurrence?

$$T(n)=4T\left(\frac{\sqrt{n}}{3}\right)+ \log^2n\,.$$

Any nice solution would be highly appreciated. My solution is to substitute $n=3^m$, giving

$$T(3^m)=4T\left(\frac{3^{m/2}}{3}\right)+\log^2 3^m\,,$$

so with $F(m)=T(3^m)$ we get $F(m)=4F((m/2)-1)+m^2=O(m^2\log m)=O(\log^2 n\,\log\log n)\,.$

### StackOverflow

#### Utility methods for operating on a custom Scala class

I'd like to define an operator that works on a custom class in Scala.
Similar to Scala's Array utility methods, such as Array concatenation:

val (a, b) = (new Array[Int](4), new Array[Int](3))
val c = Array.concat(a, b)


I'd like to define an operator vaguely as follows:

class MyClass {
  def op(): MyClass = {
    // for instance, return new MyClass()
    new MyClass()
  }
}


to be invoked like:

val x = MyClass.op()


To provide a more concrete example, suppose that MyClass is an extension of MyAbstractClass:

// Provided as a utility for the more relevant code below.
def randomBoolean(): Boolean = {
  val randomInt = Math.round(Math.random()).toInt
  if (randomInt == 1) true else false
}

abstract class MyAbstractClass[T](size: Int) {
  val stuff = new Array[T](size)
  def randomClassStuff(): Array[T]
}

class MyClass(size: Int) extends MyAbstractClass[Boolean](size) {
  def randomClassStuff(): Array[Boolean] = {
    new Array[Boolean](size) map { x => randomBoolean() }
  }
}


I realize that I could define an object called MyClass with a function called randomClassStuff defined in there, but I'd rather utilize abstract classes to require that extensions of the abstract class provide a method that creates random stuff specific to that class.

#### What are the uses for the bindable and callable pattern?

I've seen this little snippet of code floating around before and never really taken the time to wrap my head around what it does.

var bind = Function.bind;
var call = Function.call;
var bindable = bind.bind(bind);
var callable = bindable(call);


I understand in concept and in practice what .bind and .call do, but what is the benefit, advantage or practical use of creating the bindable and callable functions above?

Below is a contextual example of a use case for bindable.

var bound = bindable(db.find, db).apply(null, arguments);

var findable = bindable(db.find, db);
var bound = findable.apply(null, arguments);
var bound = findable(1, 2, 3);


What can this pattern be used for?
#### Treating an SQL ResultSet like a Scala Stream

When I query a database and receive a (forward-only, read-only) ResultSet back, the ResultSet acts like a list of database rows. I am trying to find some way to treat this ResultSet like a Scala Stream. This will allow such operations as filter, map, etc., while not consuming large amounts of RAM. I implemented a tail-recursive method to extract the individual items, but this requires that all items be in memory at the same time, a problem if the ResultSet is very large:

// Iterate through the result set and gather all of the String values into a list
// then return that list
@tailrec
def loop(resultSet: ResultSet, accumulator: List[String] = List()): List[String] = {
  if (!resultSet.next) accumulator.reverse
  else {
    val value = resultSet.getString(1)
    loop(resultSet, value +: accumulator)
  }
}


#### Understanding Akka: is it more than a long-running process management service?

In web applications you often need to run certain tasks offline or asynchronously, i.e. not on the same thread being used to service web requests.

# Scenario

An ecommerce site connecting to 3rd-party APIs to validate and charge a credit card and return a response.

Is this something you would do using Akka? Why would one choose Akka over just creating a regular long-running Java daemon that polls some sort of a queue?

#### Gatling - Looping through JSON array

I have a block of code which needs to loop through a JSON array which is obtained from the response of a REST service. (Full gist available here.)

.exec(http("Request_1")
.post("/endPoint")
.headers(headers_1)
.body(StringBody("""REQUEST_BODY""")).asJSON
.check(jsonPath("$.result").is("SUCCESS"))
.check(jsonPath("$.data[*]").findAll.saveAs("pList")))
.exec(session => {
println(session)
session
})
.foreach("${pList}", "player"){
exec(session => {
val playerId = JsonPath.query("$.playerId", "${player}")
session.set("playerId", playerId)
})
.exec(http("Request_1")
.post("/endPoint")
.body(StringBody("""{"playerId":"${playerId}"}""")).asJSON
.check(jsonPath("$.result").is("SUCCESS")))

}


The response format of the first request was

{
"result": "SUCCESS",
"data": [
{
"playerId": 2
},
{
"playerId": 3
},
{
"playerId": 4
}
]
}


And playerId shows up in the session as

pList -> Vector({playerId=2, score=200}, {playerId=3, score=200}


I am seeing in the second request the body is

{"playerId":"Right(empty iterator)}


Expected : 3 requests with body as

 {"playerId":1}
{"playerId":2}
{"playerId":3}


I can loop over the resulting array successfully if I save just the playerIds:

.check(jsonPath("$.data[*].playerId").findAll.saveAs("pList")))


### CompsciOverflow

#### Time complexity of naive look-and-say sequence algorithm

I've been looking at the look-and-say sequence for the past few days and I've been wondering what the time complexity of a naive algorithm to print the nth element is. Here is an example in Python:

def look_and_say(n):
    prev = '1'
    for i in range(n-1):
        count = 1
        next = ''
        prchar = prev[0]
        for char in prev[1:]:
            if char == prchar:
                count += 1
            else:
                next += str(count)+prchar
                prchar = char
                count = 1
        next += str(count)+prchar
        prev = next
    print prev


The problem is that I am not sure how to handle the varying length of each element. Any help is appreciated.

### Dave Winer

The new software: Little Facebook Editor.

### StackOverflow

#### Throttling messages from RabbitMQ using RxJava

I'm using RxJava to pull values out of RabbitMQ. Here's the code:

val amqp = new RabbitQueue("queueName")
val obs = Observable[String](subscr => while (true) subscr onNext amqp.next)
obs subscribe (
s => println(s"String from rabbitmq:$s"),
error => amqp.connection.close
)


It works fine but now I have a requirement that a value should be pulled at most once per second while all the values should be preserved (so debounce won't do since it drops intermediary values).

It should be like amqp.next blocks thread so we're waiting... (RabbitMQ got two messages in queue) pulled a 1st message... wait 1 second... pulled a 2nd message... wait indefinitely for the next message...

How can I achieve this using rx methods?
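For what it's worth, one approach (an assumption on my part, not a confirmed answer) is to zip the message stream with a one-second tick, e.g. something like obs.zip(Observable.interval(1.second)).map(_._1) in RxScala, so every value is kept but each emission waits for the next tick. The pacing behavior itself can be sketched in plain Scala, with a blocking queue standing in for RabbitMQ (the queue, the helper names and the delay parameter are all mine):

```scala
import java.util.concurrent.LinkedBlockingQueue
import scala.collection.mutable.ListBuffer

object PacedPull {
  // Stand-in for the RabbitMQ queue: take() blocks until a message arrives,
  // mirroring the "wait indefinitely for the next message" requirement.
  val queue = new LinkedBlockingQueue[String]()

  // Pull n messages, at most one per delay interval, dropping nothing
  // (unlike debounce, which discards intermediary values).
  def pullPaced(n: Int, onNext: String => Unit, delayMs: Long = 1000L): Unit =
    for (_ <- 1 to n) {
      val msg = queue.take() // blocks while the queue is empty
      onNext(msg)
      Thread.sleep(delayMs)  // rate limit between pulls
    }

  def main(args: Array[String]): Unit = {
    queue.put("m1"); queue.put("m2")
    val seen = ListBuffer.empty[String]
    pullPaced(2, seen += _, delayMs = 10L)
    assert(seen.toList == List("m1", "m2")) // both values preserved, in order
  }
}
```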

### Fefe

#### You've surely heard of the IT Security Act, ...

You've surely heard of the IT Security Act that our Interior Minister is currently driving through town with great fanfare. And you'll surely have thought: hmm, de Maizière on the subject of hacking, that's a story full of misunderstandings!1!! Surely this can only go wrong? Where's the catch?

Well, dear readers, I'm happy to clear that up: the money from the new IT Security Act flows to the "Verfassungsschutz" (the domestic intelligence agency). Yes, the Verfassungsschutz! The agency that stands for the opposite of security like almost no other! The people who either founded the German Ku Klux Klan, the NSU and other anti-constitutional organizations themselves and/or kept them alive for years with informant money; of all agencies, they are not being shut down but are getting new positions on top! And then they call it a Security Act!

If this weren't our tax money, it would almost make a nice stunt for the Monty Python farewell tour.

Oh, one more. In case anyone thought: hey, the Verfassungsschutz is awful, but at least they haven't been caught with trojans in hand yet, like the BKA! I have bad news for you:

The Federal Ministry of the Interior, with its now-published draft bill for an IT Security Act, wants to expand not only the Federal Office for Information Security (BSI) and the Federal Criminal Police Office (BKA) but also the Federal Office for the Protection of the Constitution (BfV).

If you're wondering why customs and the BND aren't getting a shower of money too: the BND reports to the Chancellery, customs to the Finance Ministry. The draft bill comes from the Interior Ministry. In their draft laws they naturally give money only to themselves, not to other ministries. Because that's what this is about. Money, no matter how silly the pretext.

Update: Perhaps I should also go into the substance. Just for completeness. Companies must now report hacker attacks, but a) only "to the authorities", not to the public, and b)

they can do so anonymously, as long as there are no disruptions or outages

And who is supposed to check that, or even be able to say afterwards whether one of the many outages was due to a hacker attack or not?

At this point allow me to briefly point out why the idea "hacker attacks must be reported" was on the table in the first place: so that companies and agencies would want to avoid the embarrassment. That is supposed to create an incentive to make infrastructure robust and secure. At the moment nobody really spends money on that, because in the short term it is more profitable, in business terms, to put the money into customer acquisition. So this draft bill leads that central idea completely ad absurdum. Originally the idea went even further: companies would also have had to notify each potentially affected customer individually, so that a company has to fear its customers walking away.

Oh, and why does this only apply to companies and not to government agencies as well? And why only to critical infrastructure, not ordinary infrastructure too? Who defines which infrastructure is critical and which is not?

Update: By the way, I think that if you really want to create an incentive here, you should also create a statutory special right of termination for when a company is caught being sloppy. And not just for the affected customers: for all customers. Imagine how eager the lock-in-contract telcos would suddenly be to never get hacked again!

### StackOverflow

#### Publishing to Sonatype via SBT

I am attempting to publish a Scala library to the OSS Sonatype repository via SBT. I have followed the SBT guides for Publishing & Using Sonatype and reviewed the Sonatype requirements documentation, but cannot seem to publish my artifacts. All attempts end with java.io.IOException: Access to URL [...] was refused by the server: Forbidden. I have had the necessary repository setup done in the Sonatype JIRA system. I have created a PGP key and published it to hkp://pool.sks-keyservers.net & hkp://keyserver.ubuntu.com.

build.sbt

import play.twirl.sbt.SbtTwirl

name := "spring-mvc-twirl"

organization := "us.hexcoder"

version := "1.0.0-SNAPSHOT"

scalaVersion := "2.11.2"

sbtVersion := "0.13.5"

lazy val root = (project in file(".")).enablePlugins(SbtTwirl)

// Removed for brevity
libraryDependencies ++= Seq()

// Test dependencies
// Removed for brevity
libraryDependencies ++= Seq()

// Publish configurations
publishMavenStyle := true

publishArtifact in Test := false

publishTo := {
val nexus = "https://oss.sonatype.org/"
if (isSnapshot.value)
Some("snapshots" at nexus + "content/repositories/snapshots")
else
Some("releases"  at nexus + "service/local/staging/deploy/maven2")
}

homepage := Some(url("https://github.com/67726e/Spring-MVC-Twirl"))

credentials += Credentials(Path.userHome / ".sbt" / ".credentials")

pomIncludeRepository := { _ => false }

// Additional POM information for releases
pomExtra :=
<developers>
<developer>
<name>Glenn Nelson</name>
<email>glenn@hexcoder.us</email>
</developer>
</developers>
<scm>
<connection>scm:git:git@github.com:67726e/Spring-MVC-Twirl.git</connection>
<developerConnection>scm:git:git@github.com:67726e/Spring-MVC-Twirl.git</developerConnection>
<url>git@github.com:67726e/Spring-MVC-Twirl.git</url>
</scm>


SBT Output:

> publishSigned
[info] Wrote /Users/67726e/Documents/Spring-MVC-Twirl/target/scala-2.11/spring-mvc-twirl_2.11-1.0.0-SNAPSHOT.pom
[info] :: delivering :: us.hexcoder#spring-mvc-twirl_2.11;1.0.0-SNAPSHOT :: 1.0.0-SNAPSHOT :: integration :: Tue Aug 19 09:57:13 EDT 2014
[info]  delivering ivy file to /Users/67726e/Documents/Spring-MVC-Twirl/target/scala-2.11/ivy-1.0.0-SNAPSHOT.xml
[trace] Stack trace suppressed: run last *:publishSigned for the full output.
[error] (*:publishSigned) java.io.IOException: Access to URL https://oss.sonatype.org/content/repositories/snapshots/us/hexcoder/spring-mvc-twirl_2.11/1.0.0-SNAPSHOT/spring-mvc-twirl_2.11-1.0.0-SNAPSHOT-sources.jar was refused by the server: Forbidden
[error] Total time: 5 s, completed Aug 19, 2014 9:57:18 AM
> last *:publishSigned
at org.apache.ivy.util.url.AbstractURLHandler.validatePutStatusCode(AbstractURLHandler.java:79)
at org.apache.ivy.util.FileUtil.copy(FileUtil.java:150)
at org.apache.ivy.plugins.repository.url.URLRepository.put(URLRepository.java:84)

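One commonly reported cause of such Forbidden responses (an assumption here; the log alone doesn't prove it) is a credentials file whose realm or host doesn't match what Nexus expects. The file read from Path.userHome / ".sbt" / ".credentials" is conventionally of this shape (the user and password values are placeholders):

```
realm=Sonatype Nexus Repository Manager
host=oss.sonatype.org
user=<your-sonatype-username>
password=<your-sonatype-password>
```

If the realm string differs, sbt may not send the credentials at all, and the upload is rejected.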

#### Scalatest 2.10 with akka.TestKit, weird compiler error

I'm using the Scala IDE for development. I have a few actors which I'm testing. I wrote one Scala test suite with the following definition and didn't have any problems:

import org.scalatest._
import akka.testkit._
import akka.actor.ActorSystem
import org.scalatest.BeforeAndAfterAll
import org.scalatest._
import scala.concurrent.duration._
import akka.actor.Props
import filters._

class ReaderSourceTest( _system: ActorSystem ) extends TestKit( _system ) with FunSuiteLike with BeforeAndAfterAll with ImplicitSender {

//Must have a zero argument constructor
def this() = this( ActorSystem( "ReaderSourceSuite" ) )

override def afterAll = TestKit.shutdownActorSystem( system )

test( "Reader should be alive as an actor" ) {

expectMsg( Pong( "Hello" ) )
}
}


I then created another test file to test another actor which goes like this:

import socketclient._
import org.scalatest._
import akka.testkit._
import akka.actor.ActorSystem
import org.scalatest.BeforeAndAfterAll
import scala.concurrent.duration._
import akka.actor.Props
import org.scalatest.fixture.FunSuiteLike
import org.kdawg.CommProtocol.CommMessages._
import org.kdawg.CommProtocol.CommMessages

class NetworkTest( _system: ActorSystem ) extends TestKit( _system ) with FunSuiteLike with BeforeAndAfterAll with ImplicitSender
{
import NetworkTalker._
def this() = this( ActorSystem( "NetworkTalkerTest") )

override def afterAll = TestKit.shutdownActorSystem( system )
test( "Can Send a Packet" )
{
val net = system.actorOf( NetworkTalker.props("10.1.0.5", 31000), "TestA" )
val pktBuilder = CommMessage.newBuilder
pktBuilder.setType( MessageType.STATUS_REQUEST )
pktBuilder.setStatusRequest( CommProtocol.CommandsProtos.StatusRequest.newBuilder() )
val pkt = pktBuilder.build
net ! PktSend(1, pkt)
expectMsg( PktSent(1) )
}
}


I keep getting the following error on the last line of the above class

Multiple markers at this line
- type mismatch; found : org.kdawg.socketclient.NetworkTalker.PktSent required: NetworkTalkerTest.this.FixtureParam =>
Any
- type mismatch; found : org.kdawg.socketclient.NetworkTalker.PktSent required: NetworkTalkerTest.this.FixtureParam =>


Can anyone help me figure this out?

#### How do I undo a transaction in datomic?

I committed a transaction to datomic accidentally and I want to "undo" the whole transaction. I know exactly which transaction it is and I can see its datoms, but I don't know how to get from there to a rolled-back transaction.

### /r/compsci

#### Chromebook for Computer Science?

Hey guys, I was wondering: would a Chromebook be fine for the sort of work done in Computer Science? I'll most likely be installing Ubuntu on it, and was just curious as I am starting in about 5 weeks.

Feedback is appreciated.

submitted by Melliano

### StackOverflow

#### Can't get javascriptRoutes to work with Play Framework 2

I'm trying to use javascriptRoutes in Play 2 (Scala) and I am getting an error (see below). Here is what I did:

### Add javascriptRoutes method to Application controller

def javascriptRoutes = Action { implicit request =>
import routes.javascript._
Ok(Routes.javascriptRouter("jsRoutes")(Orders.searchProducts))
.as("text/javascript")
}


### Add route to routes file

GET    /assets/javascripts/routes    controllers.Application.javascriptRoutes


### Add <script> import to main.scala.html

<head>
...
<script type="text/javascript" src="@routes.Application.javascriptRoutes"></script>
...


With these changes in place I am getting the following error in the JavaScript console:

GET http://localhost:9000/assets/javascripts/routes 404 (Not Found)
Uncaught ReferenceError: jsRoutes is not defined


What am I missing?
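One thing worth checking (an assumption based on the 404, not something the question confirms): Play matches routes top-down, so if the generic assets catch-all appears before the new entry, it intercepts /assets/javascripts/routes and returns 404 for a missing file. The specific route has to come first in the routes file:

```
# Order matters: the specific entry must precede the assets catch-all
GET     /assets/javascripts/routes    controllers.Application.javascriptRoutes
GET     /assets/*file                 controllers.Assets.at(path="/public", file)
```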

### CompsciOverflow

#### Vehicle Routing Frameworks in Python [on hold]

Are there any frameworks like OptaPlanner or VRP in Python out there (preferably under an Apache license) which give solutions for Vehicle Routing Problems and/or the Traveling Salesman Problem with Profits?

Any pointers would be appreciated.

### StackOverflow

#### Why does Scala not infer the type parameters when pattern matching with @

I'm using Scala 2.10.4 with akka 2.3.4. I ran into a problem where type inference is not behaving the way I expected.

The code below illustrates an example of what I am experiencing. I have a case class which wraps messages with an id named MyMessage. It is parameterized with the type of the message. Then I have a payload named MyPayload which contains a String.

Within an actor (here I'm just using a regular object named MyObject since the problem isn't particular to akka) I am pattern matching and calling a function that operates on my payload type MyPayload.

package so

object MyObject {
  def handle(msg: Any) = msg match {
    case m @ MyMessage(id, MyPayload(s)) =>

      // Doesn't compile

      // Compiles

      println(m)
  }
}


For reasons I don't understand, pattern matching with @ and an unapplied case class doesn't infer the type parameter of MyMessage[T]. In the code above, I would have expected m to have type MyMessage[MyPayload]. However, when I compile, the compiler believes the type is MyMessage[Any].

[error] PatternMatch.scala:9: type mismatch;
[error]  found   : so.MyMessage[Any]
[error] Note: Any >: so.MyPayload, but class MyMessage is invariant in type T.
[error] You may wish to define T as -T instead. (SLS 4.5)
[error]                      ^
[error] one error found
[error] (compile:compile) Compilation failed
[error] Total time: 1 s, completed Aug 19, 2014 12:08:04 PM


Is this expected behavior? If so, what have I misunderstood about type inference in Scala?
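For illustration only (a hedged sketch with reconstructed case classes, since the snippet above is abridged): the binder in m @ MyMessage(id, MyPayload(s)) keeps the scrutinee's static type, so T stays Any even though the payload was checked at runtime. Typing the payload in the pattern and rebuilding the message recovers MyMessage[MyPayload]:

```scala
object RefineDemo {
  case class MyPayload(s: String)
  case class MyMessage[T](id: Long, payload: T)

  // The pattern tests the payload's runtime class; rebuilding the message
  // lets the compiler see MyMessage[MyPayload] instead of MyMessage[Any].
  def refine(msg: Any): MyMessage[MyPayload] = msg match {
    case MyMessage(id: Long, p: MyPayload) => MyMessage(id, p)
  }

  def main(args: Array[String]): Unit = {
    val m = refine(MyMessage(1L, MyPayload("hi")))
    assert(m.payload == MyPayload("hi"))
  }
}
```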

#### Scheme: given a list of lists and a permutation, permute

I am practicing for my programming paradigms exam and working through problem sets I come to this problem. This is the first problem after reversing and joining lists recursively, so I suppose there is an elegant recursive solution.

I am given a list of lists and a permutation. I should permute every list including a list of lists with that specified permutation.

I am given an example:

->(permute '((1 2 3) (a b c) (5 6 7)) '(1 3 2))
->((1 3 2) (5 7 6) (a c b))


I have no idea even how to start. I need to formulate the problem in recursive interpretation to be able to solve it, but I can not figure out how.
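The recursive idea: applying a permutation to a list means picking, for each (1-based) position in the permutation, the element at that position; apply it to every inner list, then to the outer list of lists itself. A sketch of that recursion in Scala rather than Scheme (the helper names are mine):

```scala
object PermuteDemo {
  // Apply a 1-based permutation to one list, recursively on the permutation.
  def permuteOne[A](xs: List[A], perm: List[Int]): List[A] = perm match {
    case Nil     => Nil
    case i :: is => xs(i - 1) :: permuteOne(xs, is)
  }

  // Permute every inner list, then the outer list the same way.
  def permute[A](xss: List[List[A]], perm: List[Int]): List[List[A]] =
    permuteOne(xss.map(xs => permuteOne(xs, perm)), perm)

  def main(args: Array[String]): Unit = {
    // Matches the example: ((1 2 3) (a b c) (5 6 7)) with (1 3 2)
    val in: List[List[Any]] = List(List(1, 2, 3), List("a", "b", "c"), List(5, 6, 7))
    assert(permute(in, List(1, 3, 2)) ==
      List(List(1, 3, 2), List(5, 7, 6), List("a", "c", "b")))
  }
}
```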

### StackOverflow

#### Ansible - Define Inventory at run time

I am a little new to Ansible, so bear with me if my questions are a bit basic.

Scenario:

I have a few group of Remote hosts such as [EPCs] [Clients] and [Testers] I am able to configure them just the way I want them to be.

Problem:

I need to write a playbook, which when runs, asks the user for the inventory at run time. As an example when a playbook is run the user should be prompted in the following way: "Enter the number of EPCs you want to configure" "Enter the number of clients you want to configure" "Enter the number of testers you want to configure"

What should happen:

Now for instance the user enters 2, 5 and 8 respectively. The playbook should then only address the first 2 nodes in the group [EPCs], the first 5 nodes in the group [Clients] and the first 8 nodes in the group [Testers]. I don't want to create a large number of sub-groups; for instance, if I have 20 EPCs, I don't want to define 20 groups for them. I want a somewhat dynamic inventory, which should automatically configure the machines according to the user input at run time, using the vars_prompt option or something similar.

Let me post a partial part of my playbook for better understanding of what is to happen:

---
- hosts: epcs # Now this is the part where I need a lot of flexibility

vars_prompt:

gather_facts: no

- name: Check if path exists
stat: path=/home/khan/Desktop/tobefetched/file1.txt
register: st

- name: It exists
debug: msg='Path existence verified!'
when: st.stat.exists

- name: It doesn't exist
debug: msg="Path does not exist"
when: st.stat.exists == false

- name: Copy file2 if it exists
fetch: src=/home/khan/Desktop/tobefetched/file2.txt dest=/home/khan/Desktop/fetched/   flat=yes
when: st.stat.exists

- name: Run remotescript.sh and save the output of script to output.txt on the Desktop
shell: cd /home/imran/Desktop; ./remotescript.sh > output.txt

- name: Find and replace a word in a file placed on the remote node using variables
shell: cd /home/imran/Desktop/tobefetched; sed -i 's/{{name}}/{{quest}}/g' file1.txt

tags:
- replace


@gli I tried your solution, I have a group in my inventory named test with two nodes in it. When I enter 0..1 I get:

TASK: [echo sequence] *********************************************************
changed: [vm2] => (item=some_prefix0)
changed: [vm1] => (item=some_prefix0)
changed: [vm1] => (item=some_prefix1)
changed: [vm2] => (item=some_prefix1)


Similarly when I enter 1..2 I get:

TASK: [echo sequence] *********************************************************
changed: [vm2] => (item=some_prefix1)
changed: [vm1] => (item=some_prefix1)
changed: [vm2] => (item=some_prefix2)
changed: [vm1] => (item=some_prefix2)


Likewise, when I enter 4..5 (nodes not even present in the inventory), I get:

TASK: [echo sequence] *********************************************************
changed: [vm1] => (item=some_prefix4)
changed: [vm2] => (item=some_prefix4)
changed: [vm1] => (item=some_prefix5)
changed: [vm2] => (item=some_prefix5)


Any help would be really appreciated. Thanks!

### StackOverflow

#### How to determine type of Seq[A] without Reflection API [duplicate]

Is it possible to determine the element type of a Seq[A] in Scala 2.11.2?

For example:

val fruit = List("apples", "oranges", "pears")
val nums = List(1, 2, 3, 4)


I want to print the type of the Seq, something like this:

scala> def printType[A](xs: Seq[A]): Unit = xs match {
| case x: List[String] => println("String")
| case y: List[Int] => println("Int")
| }

<console>:8: warning: non-variable type argument String in type pattern List[Str
ing] (the underlying of List[String]) is unchecked since it is eliminated by era
sure
case x: List[String] => println("String")
^
<console>:9: warning: non-variable type argument Int in type pattern List[Int] (
the underlying of List[Int]) is unchecked since it is eliminated by erasure
case y: List[Int] => println("Int")
^
<console>:9: warning: unreachable code
case y: List[Int] => println("Int")
^
printType: [A](xs: Seq[A])Unit


P.S. I'm new to Scala.

UPDATE:

Is there a solution that does not use the Reflection API?
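Without reflection, erasure leaves only the elements themselves to inspect at runtime, so a common workaround (sketched here; it can only look at the head element and says nothing useful for an empty list) is:

```scala
object ElemTypeDemo {
  // Inspect the runtime class of the first element instead of the erased
  // type argument of the sequence.
  def elemType[A](xs: Seq[A]): String = xs.headOption match {
    case Some(_: String) => "String"
    case Some(_: Int)    => "Int"
    case _               => "unknown or empty"
  }

  def main(args: Array[String]): Unit = {
    assert(elemType(List("apples", "oranges", "pears")) == "String")
    assert(elemType(List(1, 2, 3, 4)) == "Int")
    assert(elemType(Nil) == "unknown or empty")
  }
}
```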

#### Apply a list of parameters to a list of functions

I have a list of parameters like List(1,2,3,"abc","c") and a set of functions which validate the data present in the list, like isNumberEven, isAValidString etc.

Currently, I take each value of the list and apply the proper validation function, like isNumberEven(params(0)). This has led to big and messy code which is completely imperative in style.

I am expecting that it should be possible to do something like this in Scala -

List(1,2,3,"abc","c").zip(List(fuctions)).foreach{ x => x._2(x._1)}


However, this fails with a type mismatch error:

error: type mismatch; found : x._1.type (with underlying type Any) required: Int with String

I tried pattern matching on Function traits but it fails due to type erasure.

Any pointers would be appreciated on how this can be solved.
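One way the zip idea can be made to type-check (a sketch; the validator bodies are hypothetical): give every validator the type Any => Boolean and pattern match inside it, so a heterogeneous List[Any] lines up with the function list:

```scala
object ValidateDemo {
  // Each validator is total over Any and dispatches on the runtime type.
  val isNumberEven: Any => Boolean = {
    case n: Int => n % 2 == 0
    case _      => false
  }
  val isAValidString: Any => Boolean = {
    case s: String => s.nonEmpty
    case _         => false
  }

  def main(args: Array[String]): Unit = {
    val params: List[Any] = List(2, "abc")
    val checks: List[Any => Boolean] = List(isNumberEven, isAValidString)
    // Pair each parameter with its validator and apply.
    val results = params.zip(checks).map { case (p, f) => f(p) }
    assert(results == List(true, true))
  }
}
```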

#### Heads up: rcctl(8) the rc.conf.local management tool landing in base soon

Antoine Jacoutot (ajacoutot@) has just committed a tool for managing rc.conf.local(8), in order to make it simpler for automated management systems such as Puppet or Ansible to interface with the operating system configuration:

CVSROOT:	/cvs
Module name:	src
Changes by:	ajacoutot@cvs.openbsd.org	2014/08/19 08:08:20

usr.sbin/rcctl : Makefile rcctl.8 rcctl.sh

Log message:
Introduce rcctl(8), a simple utility for maintaining rc.conf.local(8).

# rcctl
usage: rcctl enable|disable|status|action [service [flags [...]]]

Lots of man page improvement from the usual suspects (jmc@ and schwarze@)
not hooked up yet but committing now so work can continue in-tree
agreed by several


### StackOverflow

#### seq to vec conversion - Key must be integer

I want to get the indices of nil elements in a vector eg. [1 nil 3 nil nil 4 3 nil] => [1 3 4 7]

(defn nil-indices [vec]
(vec (remove nil? (map
#(if (= (second %) nil) (first %))
(partition-all 2 (interleave (range (count vec)) vec)))))
)


Running this code results in

java.lang.IllegalArgumentException: Key must be integer (NO_SOURCE_FILE:0)

If I leave out the (vec) call surrounding everything, it seems to work, but returns a sequence instead of a vector.

Thank you!
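As an aside, the exception most likely comes from the parameter name vec shadowing clojure.core/vec: inside the function, the outer (vec ...) call invokes the argument vector as a function, and vectors only accept integer keys, hence "Key must be integer"; renaming the parameter should fix it. The same index-of-nils computation, sketched in Scala with Option standing in for nil:

```scala
object NilIndicesDemo {
  // Collect the indices whose element is None, returned as a Vector.
  def nilIndices[A](xs: Vector[Option[A]]): Vector[Int] =
    xs.zipWithIndex.collect { case (None, i) => i }

  def main(args: Array[String]): Unit = {
    // [1 nil 3 nil nil 4 3 nil] => [1 3 4 7]
    val v = Vector(Some(1), None, Some(3), None, None, Some(4), Some(3), None)
    assert(nilIndices(v) == Vector(1, 3, 4, 7))
  }
}
```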

### TheoryOverflow

#### Example of a $U^\omega$ that is not Deterministic Büchi recognizable

Is there a regular language $U$ for which $U^\omega$ is not a deterministic Büchi recognizable language? I have been thinking about it for some time, but have been unable to come up with an example.

### StackOverflow

#### How does orElse work on PartialFunctions

I am getting very bizarre behavior (at least it seems to me) with the orElse method defined on PartialFunction

It would seem to me that:

val a = PartialFunction[String, Unit] {
case "hello" => println("Bye")
}
val b: PartialFunction[Any, Unit] = a.orElse(PartialFunction.empty[Any, Unit])
a("hello") // "Bye"
a("bogus") // MatchError
b("bogus") // Nothing
b(true)    // Nothing


makes sense, but this is not how it behaves, and I am having a lot of trouble understanding why, as the type signatures seem to indicate what I described above.

Here is a transcript of what I am observing with Scala 2.11.2:

Welcome to Scala version 2.11.2 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_11).
Type in expressions to have them evaluated.

scala> val a = PartialFunction[String, Unit] {
| case "hello" => println("Bye")
| }
a: PartialFunction[String,Unit] = <function1>

scala> a("hello")
Bye

scala> a("bye")
scala.MatchError: bye (of class java.lang.String)
at $anonfun$1.apply(<console>:7)
at $anonfun$1.apply(<console>:7)
at scala.PartialFunction$$anonfun$apply$1.applyOrElse(PartialFunction.scala:242)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
... 33 elided

scala> val b = a.orElse(PartialFunction.empty[Any, Unit])
b: PartialFunction[String,Unit] = <function1>

scala> b("sdf")
scala.MatchError: sdf (of class java.lang.String)
at $anonfun$1.apply(<console>:7)
at $anonfun$1.apply(<console>:7)
at scala.PartialFunction$$anonfun$apply$1.applyOrElse(PartialFunction.scala:242)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:162)
... 33 elided


Note the return type of val b, which has not widened the type of the PartialFunction. But this also does not work as expected:

scala> val c = a.orElse(PartialFunction.empty[String, Unit])
c: PartialFunction[String,Unit] = <function1>

scala> c("sdfsdf")
scala.MatchError: sdfsdf (of class java.lang.String)
at $anonfun$1.apply(<console>:7)
at $anonfun$1.apply(<console>:7)
at scala.PartialFunction$$anonfun$apply$1.applyOrElse(PartialFunction.scala:242)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:162)
... 33 elided
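A likely explanation, sketched below (my reading, not a confirmed answer): PartialFunction[String, Unit] { ... } invokes PartialFunction.apply, which wraps an ordinary total Function1 whose body happens to be a match, so the result reports isDefinedAt = true for every input and orElse never gets a chance to fall through. A partial-function literal ascribed to a PartialFunction type keeps the case analysis partial:

```scala
object OrElseDemo {
  // PartialFunction.apply wraps a *total* function, so isDefinedAt is
  // always true and the MatchError escapes from the body instead.
  val total = PartialFunction[String, Unit] { case "hello" => () }

  // A partial-function literal keeps the cases partial.
  val partial: PartialFunction[String, Unit] = { case "hello" => () }

  def main(args: Array[String]): Unit = {
    assert(total.isDefinedAt("bogus"))    // always defined: orElse is dead code
    assert(!partial.isDefinedAt("bogus")) // undefined here: orElse can take over
    val b = partial.orElse[String, Unit] { case _ => () }
    b("bogus") // falls through to the default case, no MatchError
  }
}
```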


### StackOverflow

#### Optional params with Play 2 and Swagger

I'm trying to use Swagger to document a Play 2 REST API but swagger-play2 doesn't seem to understand optional parameters defined with Scala's Option type - the normal way to make a param optional in Play 2:

GET /documents controllers.DocumentController.getDocuments(q: Option[String])


I want the q param to be optional. There is a matching annotated controller method with this Option[String] param. On startup I'm getting UNKNOWN TYPE in the log, and the JSON produced by api-docs breaks swagger-ui:

UNKNOWN TYPE: scala.Option
[info] play - Application started (Dev)

Is there another way to specify an optional parameter in Play 2 and have Swagger understand it?

### /r/scala

#### Beginner question: how do you use scala documentation?

So I am trying to teach myself scala, and one of the first things I wanted to do is read a text file, preferably line by line. There is nothing about files in the scala Getting Started or in Guides and Overviews or in the Tutorials. So I try googling for it and the top post is a forum question (http://www.scala-lang.org/old/node/5415) where they recommend scala.io.Source.fromPath("filename"). This fails: value fromPath is not a member of object scala.io.Source.

At this point I'm a little annoyed, but I try once again to go to the docs, this time the API docs for scala.io.Source: http://www.scala-lang.org/api/current/index.html#scala.io.Source . There is a long list of methods, but looking at it I can't seem to find any constructors, and there's certainly no fromPath function.

Internet failing me, I pick up Programming in Scala by Martin Odersky, and sure enough early on he recommends "source.fromFile", which works just fine. But source.fromFile isn't in the API page either! More searching, and I realize googling for "scala io source" brings up the Source class, not the source Object, which is what I needed. There is not any link to the source Object from the Source class page, at least that I could see.

So I've come away feeling like I shouldn't waste my time trying to slog through scala's official documentation for anything. Is there a point of scala proficiency where navigating the API becomes easier? Is googling and searching through stackoverflow the best way to find a function that does what I want?

submitted by emeraldemon