Planet Primates

August 04, 2014

AWS

AWS Week in Review - July 28, 2014

Let's take a quick look at what happened in AWS-land last week:

Monday, July 28
Tuesday, July 29
Wednesday, July 30
Thursday, July 31
Friday, August 1

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at August 04, 2014 07:00 AM

July 29, 2014

StackOverflow

SBT Scala Assembly Plugin

How do I enable the assembly plugin on my repo, https://github.com/rmuhamedgaliev/skala-skeleton? I tried to fix it, but I can't run it with the command sbt assembly:

assembly
[error] Not a valid command: assembly
[error] Not a valid project ID: assembly
[error] Expected ':' (if selecting a configuration)
[error] Not a valid key: assembly
[error] assembly
[error] ^
exit
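That error usually means sbt cannot see the sbt-assembly plugin at all. For reference, a minimal sketch of the setup as the 0.11.x plugin documented it at the time (the version number is illustrative, not taken from the repo above):

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")

// build.sbt (or assembly.sbt) -- the 0.11.x plugin also required adding its settings explicitly
import AssemblyKeys._

assemblySettings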

by Rinat Mukhamedgaliev at July 29, 2014 07:56 PM

How to Mock a function in Scala?

An attempt was made to mock a function in Scala after reading this and this documentation, without success.

Test

test("randomInteger") {
  val m = mock[NumberSequences]
  (m.randomInteger(5) _).when().returns(5) 
  assert(m === 5)
}

Main

object NumberSequences {
  def randomInteger(a: Int) : Int = {
    scala.util.Random.nextInt(a) + 1
  }
}

Outcome

\numbersequences\scala\NumberSequencesTest.scala:56: not found: type NumberSequences
[error]     val m = mock[NumberSequences]
[error]                  ^
[error] three errors found
[error] (test:compile) Compilation failed
[error] Total time: 1 s, completed Jul 29, 2014 9:25:30 PM
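The error comes from the fact that NumberSequences is an object, so there is no type of that name to mock. A minimal sketch of one way around this, assuming ScalaMock with ScalaTest: put the method behind a trait and stub that instead.

trait NumberSequences {
  def randomInteger(a: Int): Int
}

// inside a suite that mixes in org.scalamock.scalatest.MockFactory
test("randomInteger") {
  val m = stub[NumberSequences]
  (m.randomInteger _).when(5).returns(5)
  assert(m.randomInteger(5) === 5)
}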

by utrecht at July 29, 2014 07:52 PM

How to inject play.api.db.slick.Config.driver.simple.Session inside a slick DAO component

I'm using the cake pattern for injecting dependencies between components in a Play 2.2.1 application. The application is composed of Play controllers, and we use a custom ActionBuilder to open our DB session. We currently pass that DB session all the way back to our model layer through the controller and DAO layers as an implicit argument. (ActionBuilder -> Controller -> DAO -> Slick Model)

I use play-slick for Slick integration and try to use a DAO approach to encapsulate access to our Slick models. Our DAOs have several function definitions like findById(id: Int)(implicit s: Session): Option[Entity]. I would like to avoid that implicit session parameter in every single function definition by injecting a DB-session-retrieving component. This component would be invoked inside the DAO function blocks every time to retrieve the current request's DB session.

Coming from the Java and Spring world, I don't know exactly how to achieve that given that I probably can't rely on any ThreadLocal scoped proxy.

Any idea how I would be able to achieve that? Is this a good or bad idea?

by jplmelanson at July 29, 2014 07:31 PM

CompsciOverflow

Are neural networks dynamical systems?

A dynamical system is one that evolves with time, whose evolution can be described by a rule, and which is deterministic. In this context, can I say that neural networks have a rule of evolution, namely the activation function $f(\text{sum of product of weights and features})$?

Are neural networks

  1. dynamical systems,
  2. linear or nonlinear dynamical systems?

Can somebody please shed some light on this?

by Ria George at July 29, 2014 07:30 PM

StackOverflow

Scala Boolean to String conversions

One way to convert true: Boolean onto a String is

scala> true.toString
res: String = true

However,

scala> true.asInstanceOf[String]
java.lang.ClassCastException: java.lang.Boolean cannot be cast to java.lang.String

Why does the latter attempt fail?
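For contrast, a quick sketch of conversions that do work: a cast only reinterprets the static type and never converts the value, so Boolean to String has to go through an explicit conversion.

val b: Boolean = true
val s1: String = b.toString        // "true"
val s2: String = String.valueOf(b) // "true"
// b.asInstanceOf[String]          // ClassCastException at runtime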

Many Thanks

by enzyme at July 29, 2014 07:27 PM

How to synchronize data amongst devices in Wi-Fi

I am developing an app for iOS and Android. The basic functionality is to keep a certain set of data synchronized across all devices in a Wi-Fi network without a central server. Every device can modify that set of data.

The current approach is to discover the other devices via Bonjour/Zeroconf and then send the "change messages" to all devices via ZeroMQ.

As both frameworks are causing quite a lot of implementation problems, I am asking whether this is the correct way to accomplish this.

I had most of the logic implemented with Bonjour and HTTP requests sent to all devices. The problem was simply that network requests would not get received, even after three tries, because the network failed. I want some kind of reconstruction of a general state, or a more reliable messaging framework.

Would some kind of Gossip approach to spread the information as well as discover all devices be better?

by Lukas Leitinger at July 29, 2014 07:25 PM

CompsciOverflow

How to decode multiple-digit gamma codes and get the gap sequence?

How to decode gamma code ($\gamma$ code):

1110001110101011111101101111011

and get the gap sequence?

Detailed information about Gamma codes ($\gamma$ codes) with brief example of decoding can be found here.

But their example only covers the case where the gamma code ($\gamma$ code) contains a single encoded number; how do I deal with a binary string containing multiple encoded numbers?
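A minimal sketch of the usual streaming decode (read the unary length prefix, then that many offset bits, re-attach the implicit leading 1, and repeat until the string is exhausted); under this convention the example string above decodes to the gaps 9, 6, 3, 59, 7.

// Assumes a well-formed gamma-coded bit string.
def decodeGamma(bits: String): List[Int] = {
  def loop(i: Int, acc: List[Int]): List[Int] =
    if (i >= bits.length) acc.reverse
    else {
      val len = bits.drop(i).takeWhile(_ == '1').length // unary part: run of 1s
      val offsetStart = i + len + 1                     // skip the terminating 0
      val offset = bits.substring(offsetStart, offsetStart + len)
      val value = Integer.parseInt("1" + offset, 2)     // re-attach the implicit leading 1
      loop(offsetStart + len, value :: acc)
    }
  loop(0, Nil)
}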

by Mike at July 29, 2014 07:10 PM

StackOverflow

Clojure wrap-json-response returning 404

I am learning to use Clojure/Compojure and I am having problems building a small web app.

I have the following routes defined on mywebapp/routes.clj

(defroutes app-routes
  (GET "/" [] (index-page))
  (GET "/about" [] (about-page))
  (GET "/bluebutton" [] (bluebutton-page))
  (GET "/bluebutton/patient" [] (patient-handler))
  (route/resources "/")
  (route/not-found "No page"))

And the one that is not working is /bluebutton/patient, where I am expecting a JSON response from the following code:

(use '[ring.middleware.json :only [wrap-json-response]]
     '[ring.util.response :only [response]])

(defn patient-handler []
  (println "patient-handler")
  (wrap-json-response (response {:body {:foo "bar"}})))

For some reason I am getting a 404 response in my browser, but I can see from the REPL output that the code of patient-handler is executing. Do you know if I am missing something?

Thanks in advance! And sorry for my weird english!

by tillonation at July 29, 2014 07:01 PM

StackOverflow

How to TDD the creation of a Random Integer Range in Scala using Regex?

The aim is to TDD a Random Integer Range from 1 until 10 in Scala using Regex.

Test

test("randomInteger") {
  assert(NumberSequences.randomInteger(10) === 1|2|3|4|5|6|7|8|9|10)
}

Main

def randomInteger(a: Int) : Int = {
  scala.util.Random.nextInt(a) + 1 
}

Outcome

> test
[error] numbersequences\scala\NumberSequencesTest.scala:53: value | is not a member of Option[String]
[error]     assert(NumberSequences.randomInteger(10) === 1|2|3|4|5|6|7|8|9|10)
[error]                                                   ^
[error] one error found
[error] (test:compile) Compilation failed
[error] Total time: 1 s, completed Jul 29, 2014 8:25:28 PM
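For reference, a small sketch of how the intended check can be expressed, either as a plain range test or, as the title suggests, against a regular expression (not the poster's code):

test("randomInteger stays in 1..10") {
  val n = NumberSequences.randomInteger(10)
  assert((1 to 10) contains n)                 // plain range check
  assert(n.toString.matches("^([1-9]|10)$"))   // or via a regex
}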

by utrecht at July 29, 2014 07:00 PM

UnixOverflow

How can I boot the PC-BSD live DVD-ISO IMAGE directly via GRUB2?

Via the loopback command, GRUB2 allows booting an ISO file directly.

Now, I've configured the corresponding menuentry to boot the PC-BSD Live DVD ISO, but when I try to boot it, the FreeBSD bootstrap loader outputs:

can't load 'kernel'

Here is the GRUB2 menuentry I currently use:

menuentry "PC-BSD" {
        search --no-floppy --fs-uuid --set root 0d11c28a-7186-43b9-ae33-b4bd351c60ad
        loopback loop /PCBSD9.0-RC1-x64-DVD-live.iso
        kfreebsd (loop)/boot/loader
}

Does anyone know how I'd need to amend this in order to be able to boot the PC-BSD live system?

by user569825 at July 29, 2014 06:58 PM

/r/scala

Scala 2.12 Will Only Support Java 8

The community will be stuck at 2.11 and Java 7 for a long time.

An InfoQ interview with Adriaan Moors and Jason Zaugg about this change and how Scala will be making use of Java 8's lambdas implementation.

submitted by ashawley

July 29, 2014 06:58 PM

StackOverflow

ZeroMQ pub/sub with subscriber in PHP that cannot receive messages from publisher in C#

I'm using ZeroMQ to facilitate a publish/subscribe environment I'm needing. Both the pub and sub are running on the localhost.

I implemented pub with C#:

            var options = new Options();
            var parser = new CommandLineParser(new CommandLineParserSettings(Console.Error));
            if (!parser.ParseArguments(args, options))
                Environment.Exit(1);

            using (var ctx = ZmqContext.Create())
            {
                using (var socket = ctx.CreateSocket(SocketType.PUB))
                {                   
                    foreach (var endPoint in options.bindEndPoints)
                        socket.Bind(endPoint);

                    long msgCptr = 0;
                    int msgIndex = 0;
                    while (true)
                    {
                        if (msgCptr == long.MaxValue)
                            msgCptr = 0;
                        msgCptr++;
                        if (options.maxMessage >= 0)
                            if (msgCptr > options.maxMessage)
                                break;                        
                        if (msgIndex == options.altMessages.Count())
                            msgIndex = 0;
                        var msg = options.altMessages[msgIndex++].Replace("#nb#", msgCptr.ToString("d2"));                        
                        Thread.Sleep(options.delay);
                        Console.WriteLine("Publishing: " + msg);
                        socket.Send(msg, Encoding.UTF8);
                    }
                }
            }

sub implemented in python:

def main():

    test_connect = "tcp://127.0.0.1:5000"
    test_topic = ""

    connect_to = test_connect
    topics = test_topic

    ctx = zmq.Context()
    s = ctx.socket(zmq.SUB)
    s.setsockopt(zmq.SUBSCRIBE, "")
    s.connect(connect_to)

    print "Receiving messages on All Topics ..."

    while True:
        print "try to receive"
        objA = s.recv()
        print objA

I ran the sub first and then the pub. But the sub cannot receive any message from the pub. I don't know why.

I have tested with the pub in Python and the sub in PHP, and with both pub and sub in Python. Both setups worked well. But when the pub or sub is implemented in C#, problems arise.

How can I fix this problem?

by sakonque at July 29, 2014 06:52 PM

lazily convert a String with words to a Stream of words

Given a string with words and whitespaces such as "aaa bbb ccc ddd", can you lazily convert this to a stream that splits the string by white space such as Stream("aaa", ???)? Is creating an iterator first required?
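One possible sketch (not the only way): build the Stream directly from the split positions so each word is only produced on demand; an explicit iterator is not required. This assumes words are separated by single spaces, as in the example.

def words(s: String): Stream[String] = {
  def loop(from: Int): Stream[String] =
    if (from >= s.length) Stream.empty
    else {
      val end = s.indexOf(' ', from)
      if (end < 0) Stream(s.substring(from))
      else s.substring(from, end) #:: loop(end + 1)
    }
  loop(0)
}

// words("aaa bbb ccc ddd").take(2).toList  // List("aaa", "bbb"), rest never built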

by kfer38 at July 29, 2014 06:38 PM

/r/emacs

A package on a league of its own: Helm (v2.0 release)

Click here for the updated guide

Since the previous version of this guide was posted here, the guide has been updated with more goodies:

  • Extend configuration updated: do not manually bind commands with C-c h, since helm binds various commands to the prefix C-x c by default. Instead, we reuse the prefix key, and if we want to change the prefix key, execute this expression:

    (setq helm-command-prefix-key "C-c h")

    And get many bindings with C-c h for free.

  • helm-mini got updated. Add alternative commands and update usage to make it more precise.

  • Live grep got its own section.

  • helm-find got updated. Interactively search for files with find using Helm.

  • helm-locate got updated. Interactively search for files with locate using Helm.

  • helm-info-* got updated. Read info the Helm way.

  • helm-regexp added. Test out your regexp interactively.

  • helm-register added. View what's in your registers.

  • helm-list-emacs-process added.

  • helm-top added. Manage system processes the Helm way - the easy way.

  • helm-surfraw added. Search anything with many search services through surfraw using Helm.

  • helm-google-suggest added. Interactively get Google results from Emacs, using Helm.

  • helm-color added. With this command, theme customization is a breeze.

  • helm-eval-expression-with-eldoc added. Eval expression and see feedback for every character you typed.

  • helm-calcul-expression added. This is a Calc with Helm interface.

  • Fix typos.

submitted by tuhdo

July 29, 2014 06:24 PM

QuantOverflow

Heat/Diffusion Equation

I am working on a problem where I have successfully reduced a version of Black-Scholes to the heat equation and then shown the solution to be:

$$u(x,t)=\frac{1}{2\sqrt{t\pi}}\int_{-\infty}^\infty{f(\xi)e^{-\frac{(x-\xi)^2}{4t}}}d\xi$$

the integral is from -infinity to infinity

I now need to show that if $f(x)$ is continuous then $$\lim_{t\rightarrow 0+}u(x,t)=f(x)$$

I am not looking for the solution, but for guidance on where to start, as I need to be able to complete this on my own.
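One common starting point (a sketch of the idea only, not the full argument): substitute $\xi = x + 2\sqrt{t}\,s$, so that

$$u(x,t)=\frac{1}{\sqrt{\pi}}\int_{-\infty}^\infty f(x+2\sqrt{t}\,s)\,e^{-s^2}\,ds,$$

and then use the continuity of $f$ together with $\frac{1}{\sqrt{\pi}}\int_{-\infty}^\infty e^{-s^2}\,ds=1$ to pass to the limit as $t\rightarrow 0+$.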

Thanks for any help!

by UnknownUser at July 29, 2014 06:07 PM

StackOverflow

Scala io.Source.fromFile not finding my file even with absolute path specified

I am trying to access a file in Scala using io.Source.fromFile. I have specified the full path, but I am still getting a "no such file or directory" error.

This is a general version of what my code looks like:

val lines = io.Source.fromFile("~/top/next/source/resources/desiredFile.txt").getLines()

I'm running Ubuntu if that makes any difference.
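A path starting with ~ is expanded by the shell, not by the JVM, so fromFile treats it as a literal directory name. A small sketch of a workaround is to build the path from the user.home system property instead:

val home = sys.props("user.home")
val lines = io.Source.fromFile(s"$home/top/next/source/resources/desiredFile.txt").getLines()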

by KHall at July 29, 2014 06:07 PM

QuantOverflow

proper choice of risk aversion parameter in the risk-sensitive cost-criterion

Suppose I want to minimize a certain risk-sensitive cost. Is it a valid question to ask what the proper choice (and in which sense) of the risk aversion parameter in the risk-sensitive cost criterion is? It has to be positive to be risk-averse. But is it the case that the larger it is, the better? Is there any trade-off here?

by user11679 at July 29, 2014 06:05 PM

CompsciOverflow

Why does an acceptor send the highest numbered proposal with number less than n as a response to prepare(n) in paxos?

I was reading the Paxos notes from Yale at the following link:

http://pine.cs.yale.edu/pinewiki/Paxos

Recall that Paxos is a distributed system algorithm with the goal that the processes participating in its protocol will reach consensus on one of the valid values.

I was trying to better understand the revoting mechanism to avoid deadlocks in Paxos. The revoting mechanism is explained as follows in the article:

The revoting mechanism now works like this: before taking a vote, a proposer tests the waters by sending a prepare(n) message to all accepters where n is the proposal number. An accepter responds to this with a promise never to accept any proposal with a number less than n (so that old proposals don't suddenly get ratified) together with the highest-numbered proposal that the accepter has accepted (so that the proposer can substitute this value for its own, in case the previous value was in fact ratified).

The bold section is the one that I was trying to understand better. The author tries to justify it with:

"...so that the proposer can substitute this value for its own, in case the previous value was in fact ratified..."

But I didn't really understand why the proposer would want to ratify the previous value. By doing this, what crucial safety property is it guaranteeing? Why is it responding with that and not something else? Responding with the highest could be a problem, right, since the current proposal would get lost?

by Pinocchio at July 29, 2014 05:59 PM

StackOverflow

Idiomatic way to prepare dynamic response structure

So I have the following response (trimmed down) coming from Elasticsearch.

 "aggregations": {
    "top_makes": {
        "buckets": [
            {
                "key": "toyota",
                "doc_count": 129,
                "avg_length": {
                    "value": 57.002
                },
                avg_year : {
                    "value" : 2008
                },
                "top_models": {
                    "buckets": [
                        {
                            "key": "corolla",
                            "doc_count": 30,
                            "top_res": {
                                "hits": {
                                    "total": 30,
                                    "max_score": 1,
                                    "hits": [
                                        {
                                            "_index": "cars",
                                            "_type": "car",
                                            "_id": "85",
                                            "_score": 1,
                                            "_source": {

                                                "make": "Toyota",
                                                "color": "Yellow",
                                                "year": 2010
                                            }
                                        }, 

I have the following defs (only printing, as a crude test) using the Elasticsearch Java client API, which consume the above response.

def getAggregations(aggres: Option[Aggregations]): Option[Iterable[Any]] = {
  aggres map { agg =>
    val aggS = agg.asMap().asScala
    aggS map {
      case (name, termAgg: Terms) => getBuckets(Option(termAgg.getBuckets()))
      case (name, topHits: TopHits) =>
        val tHits = Option(topHits.getHits())
        tHits map { th => getTopHits(th.asScala) }
      case (h, a: InternalAvg) => println(h + "=>" + a.getValue())
    }
  }
}

def getBuckets(buckets: Option[java.util.Collection[Bucket]]) = {
  buckets map { bks =>
    val bksS = bks.asScala
    bksS map { b =>
      println("Bucket Key =>" + b.getKey())
      println("Doc count =>" + b.getDocCount())
      getAggregations(Option(b.getAggregations()))
    }
  }
}

def getTopHits(topHits: Iterable[org.elasticsearch.search.SearchHit]) = {
  topHits.foreach(h => println("source =>" + h.getSource()))
}

As you can see, the response structure is recursive and could change too; e.g. if next time the user wants to create more nested aggregations (buckets), the response will contain more deeply nested JSON.

In Scala I have to map this JSON response to some kind of case classes so the web service consumer can have a stable, somewhat generic response format. What's the best way to map the incoming (sort of open-ended) JSON response to some sort of (typed) case class structure?
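One possible sketch of such a stable shape is a recursive case class, so arbitrarily deep bucket nesting maps onto the same type (the names here are illustrative, not the poster's API):

case class BucketResult(
  key: String,
  docCount: Long,
  metrics: Map[String, Double],      // e.g. avg_length, avg_year
  hits: Seq[Map[String, Any]],       // the top_res _source documents
  subBuckets: Seq[BucketResult]      // nested aggregations, however deep
)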

by user2066049 at July 29, 2014 05:59 PM

High availability deployment with Ansible

I'm using Ansible to deploy pairs of NGinx/Tomcat instances and I'm trying to improve availability during deployment.

A logical instance is 1 NGinx + 1 Tomcat: I have 4 logical instances spread over 2 distant locations (see the hosts file below).

I launch one playbook called deploy.yml which looks like this:

- hosts: ngx-servers
  tasks:
    - include: tasks/remove-server.yml

- hosts: app-servers
  roles:
    - role: app-server

- hosts: ngx-servers
  tasks:
    - include: tasks/add-server.yml

What I want is to deploy 50% of my 4 logical instances before deploying the others (and stop everything if something goes wrong). One solution could be targeting montigny-app-servers/montigny-ngx-servers first (instead of app-servers/ngx-servers) and then the second location but I would need to duplicate playbook content (and so on if I need to add other server locations).

Any idea how to do this properly?

Here is my hosts file:

#
# Serveurs d'application
#

# Montigny
[montigny-app-servers]
mo-app-server-1 ansible_ssh_host=1y.domain.fr ansible_ssh_user=devops
mo-app-server-2 ansible_ssh_host=2y.domain.fr ansible_ssh_user=devops

# Bievre
[bievre-app-servers]
bi-app-server-1 ansible_ssh_host=1b.domain.fr ansible_ssh_user=devops
bi-app-server-2 ansible_ssh_host=2b.domain.fr ansible_ssh_user=devops

# Tous
[app-servers:children]
montigny-app-servers
bievre-app-servers

#
# Serveurs NGinx
#

# Montigny
[montigny-ngx-servers]
mo-ngx-server-1 ansible_ssh_host=1y.domain.fr ansible_ssh_user=devops
mo-ngx-server-2 ansible_ssh_host=2y.domain.fr ansible_ssh_user=devops

# Bievre
[bievre-ngx-servers]
bi-ngx-server-1 ansible_ssh_host=1b.domain.fr ansible_ssh_user=devops
bi-ngx-server-2 ansible_ssh_host=2b.domain.fr ansible_ssh_user=devops

# Tous
[ngx-servers:children]
montigny-ngx-servers
bievre-ngx-servers

by pfevrier at July 29, 2014 05:57 PM

QuantOverflow

Should I analyze the tick data day by day?

Let's assume that we have one month of tick data traded at the NYSE. We want to model the price changes as a function of the last p lags of price changes and the last q lags of the time duration between trades (this is similar to the GARMA(p,q) model). Each day we use the data from 9:30 until 16:00. My question is: in order to analyze the data, should I analyze each day separately?

by F.F. at July 29, 2014 05:29 PM

StackOverflow

When I run activator, I see java.io.IOException: No such file or directory

When I run activator I see the output below. I'm building a Scala app using the Play framework. It was working fine for a while and all of a sudden I get this.

$ activator

java.io.IOException: No such file or directory
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createNewFile(File.java:1006)
    at xsbt.boot.Locks$.apply0(Locks.scala:35)
    at xsbt.boot.Locks$.apply(Locks.scala:28)
    at xsbt.boot.Launch.locked(Launch.scala:240)
    at xsbt.boot.Launch.app(Launch.scala:149)
    at xsbt.boot.Launch.app(Launch.scala:147)
    at xsbt.boot.Launch$.run(Launch.scala:102)
    at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:36)
    at xsbt.boot.Launch$.launch(Launch.scala:117)
    at xsbt.boot.Launch$.apply(Launch.scala:19)
    at xsbt.boot.Boot$.runImpl(Boot.scala:44)
    at xsbt.boot.Boot$.main(Boot.scala:20)
    at xsbt.boot.Boot.main(Boot.scala)
Error during sbt execution: java.io.IOException: No such file or directory

Any ideas? I even tried reinstalling activator and re-cloning the repo for the app.

by Inbl at July 29, 2014 05:17 PM

Planet Clojure

Functional Geekery Episode 13 – Martin J. Logan

In this episode I talk with Martin J. Logan. We cover his experience with Erlang, why OTP, his book Erlang and OTP in Action, designing processes in an actor based system, Erlang Camp and more.

Our Guest, Martin J. Logan

http://blog.erlware.org
@martinjlogan on Twitter
@erlangcamp on Twitter
@erlware on Twitter

Topics

Martin’s Background
Why Threads are a Bad Idea by John Ousterhout
Erlware
How was the adjustment to learning Erlang
Why Object Oriented Programming never made sense as taught
Erlang as an Object Oriented language
Pattern matching, binary streams, and gen_fsm behavior
How Martin was able to stay in Erlang since 1999
Learning Erlang through the mailing list
How the Erlang community has evolved over time
Erlang and OTP in Action
Motivation of writing Erlang and OTP in Action
Why they took the approach to Erlang and OTP in Action they did
Martin and his co-authors as Mr. Miyagi teaching Erlang and OTP
Reticular activation
Practicality as the goal of the book
Ability to distribute systems
Location transparency in Erlang
Aptness of metaphor of Erlang processes as “micro-services”
How to determine right granularity of Erlang processes
Library applications and active applications
Designing for Actor Based Systems
Processes modeled as truly concurrent activities
Erlang Camp
Chicago Erlang user group
“At the end of this user group we are going to announce we are having a conference in the fall”
Teach basics of Erlang and dive into Erlang in two intense days
Repeat attendees help to coordinate the next Erlang Camps
Chicago Erlang Conference
Garrett Smith
LambdaJam from Alex Miller and Dave Thomas
Possibility of a second edition of Erlang and OTP in Action
DevOps.com

A giant Thank You to David Belcher for the logo design.

by Steven Proctor at July 29, 2014 05:11 PM

StackOverflow

Adding sets of numbers up to 16

I have some sets of numbers:

(#{7 1} #{3 5} #{6 3 2 5} 
 #{0 7 1 8} #{0 4 8} #{7 1 3 5} 
 #{6 2} #{0 3 5 8} #{4 3 5} 
 #{4 6 2} #{0 6 2 8} #{4} #{0 8} 
 #{7 1 6 2} #{7 1 4})

I wish to turn each set into a four-number vector, such that the numbers in each vector add up to 16 and can only come from that set:

 #{7 1}   => [1 1 7 7]
 #{4 3 5} => [3 4 4 5]
 #{4}     => [4 4 4 4]
 #{0 8}   => [0 0 8 8]

How would the Clojure code be written?

by zcaudate at July 29, 2014 05:08 PM

Overcoming Bias

Adam Ford & I on Great Filter

Adam Ford interviewed me again, this time on the Great Filter:

We have three main sources of info on existential risks (xrisks):

  1. Inside View Analysis – where we try to use our best theories to reason about particular causal processes.
  2. Earth Track Records – the empirical distribution of related events observed so far on Earth.
  3. The Great Filter – inferences from the fact that the universe looks dead everywhere but here.

These sources are roughly equally informative. #2 suggests xrisks are low, even if high enough to deserve much effort to prevent them. I’d say that most variations on #1 suggest the same. However, #3 suggests xrisks could be very high, which should encourage more xrisk-mitigation efforts.

Ironically most xrisk efforts (of which I’m aware) focus on AI-risk, which can’t explain the great filter.

by Robin Hanson at July 29, 2014 05:00 PM

QuantOverflow

ex ante tracking error correlation between funds

I have two portfolios called Comb & Global. They both have the same investable universe, let's say 3000 stocks, and are measured against the same benchmark. So it is possible that both funds hold the same stocks. I would like to examine the correlation of the ex-ante tracking errors between the two funds.

I know I can calculate the ex-ante tracking error as below,

te = sqrt((port_wgt - bm_wgt)' * cov_matrix * (port_wgt - bm_wgt))

I also know the correlation is calculated by

 p = cov(x,y) / (stdev(x) * stdev(y))

I was wondering the best way to calculate the ex ante correlation between the two funds? Is there a relationship between the two funds weights that I can make use of?

Update

I should have mentioned that the two portfolios are sub-portfolios and are combined into one portfolio. So I wanted to see the correlation of the ex-ante tracking errors between the two sub-portfolios.

I realised I can do the following,

port_wgts - number_of_companies x 2 matrix
cov_matrix - number_of_companies x number_of_companies matrix

so the below line will return a 2x2 covariance matrix.

port_wgts' * cov_matrix * port_wgts

So I have the variances of both sub portfolios - taking the square root of this gives me the tracking error for both.

Convert the 2 X 2 covariance matrix to a correlation matrix by the following

  D = Diag(cov_matrix)^(1/2)
  corr_matrix = D^-1 * cov_matrix * D^-1

So I now have the correlation between the two sub portfolios just using the weights.
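For reference, a small sketch of that last step in code, assuming cov holds the 2x2 matrix port_wgts' * cov_matrix * port_wgts computed above:

def teAndCorrelation(cov: Array[Array[Double]]): (Double, Double, Double) = {
  val te1 = math.sqrt(cov(0)(0))     // tracking error of sub-portfolio 1
  val te2 = math.sqrt(cov(1)(1))     // tracking error of sub-portfolio 2
  val rho = cov(0)(1) / (te1 * te2)  // off-diagonal of D^-1 * cov * D^-1
  (te1, te2, rho)
}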

by mHelpMe at July 29, 2014 04:59 PM

StackOverflow

Scala View Bounds Chaining Issue

I know view bounds may be deprecated soon. Please ignore that.

The following code compiles only if just one of the last 3 implicit conversions is uncommented. Is this a compiler bug?

object Viewable extends App {
  /** Speed of light in m/s */
  val C: Double = 299293458d

  /** @param weight in kilograms */
  case class Matter(name: String, weight: Double) {
    /** @return matter-energy equivalence in megajoules */
    def energy: Double = weight * C * C / 1000000d

    def megaJouleMsg: String = f"$name's mass-energy equivalence is $energy%.0f megajoules."
  }

  case class Animal(name: String, height: Double, weight: Double)
  case class Vegetable(name: String, height: Double, weight: Double)
  case class Mineral(name: String, weight: Double)

  case class Bug(name: String, height: Double, weight: Double, canFly: Boolean)
  case class Whale(name: String, height: Double, weight: Double, hasTeeth: Boolean)

  case class AppleTree(name: String, height: Double, weight: Double, age: Int)
  case class Grass(name: String, height: Double, weight: Double, edible: Boolean)

  case class Sand(name: String, color: String, weight: Double)
  case class Rock(name: String, color: String, weight: Double)

  implicit def sandToMineral(sand: Sand) = Mineral(sand.name, sand.weight)
  implicit def rockToMineral(rock: Rock) = Mineral(rock.name, rock.weight)

  implicit def appleTreeToVegetable(tree: AppleTree) = Vegetable(tree.name,  tree.height,  tree.weight)
  implicit def grassToVegetable(grass: Grass)        = Vegetable(grass.name, grass.height, grass.weight)

  implicit def bugToAnimal(bug: Bug)       = Animal(bug.name, bug.height, bug.weight)
  implicit def whaleToAnimal(whale: Whale) = Animal(whale.name, whale.height, whale.weight)

  implicit def animalToMatter[X <% Animal](animal: X)          = Matter(animal.name,    animal.weight)
  implicit def vegetableToMatter[X <% Vegetable](vegetable: X) = Matter(vegetable.name, vegetable.weight)
  implicit def mineralToMatter[X <% Mineral](mineral: X)       = Matter(mineral.name,   mineral.weight)

  println(Animal("Poodle", 1.0, 8.0).megaJouleMsg)
  println(AppleTree("Spartan", 2.3, 26.2, 12).megaJouleMsg)
  println(Rock("Quartz crystal", "white", 2.3).megaJouleMsg)
}

The error messages are:

type mismatch;
 found   : solutions.Viewable.Animal
 required: ?{def megaJouleMsg: ?}
Note that implicit conversions are not applicable because they are ambiguous:
 both method animalToMatter in object Viewable of type [X](animal: X)(implicit evidence$1: X => solutions.Viewable.Animal)solutions.Viewable.Matter
 and method vegetableToMatter in object Viewable of type [X](vegetable: X)(implicit evidence$2: X => solutions.Viewable.Vegetable)solutions.Viewable.Matter
 are possible conversion functions from solutions.Viewable.Animal to ?{def megaJouleMsg: ?}
  println(Animal("Poodle", 1.0, 8.0).megaJouleMsg)
                ^

type mismatch;
 found   : solutions.Viewable.AppleTree
 required: ?{def megaJouleMsg: ?}
Note that implicit conversions are not applicable because they are ambiguous:
 both method animalToMatter in object Viewable of type [X](animal: X)(implicit evidence$1: X => solutions.Viewable.Animal)solutions.Viewable.Matter
 and method vegetableToMatter in object Viewable of type [X](vegetable: X)(implicit evidence$2: X => solutions.Viewable.Vegetable)solutions.Viewable.Matter
 are possible conversion functions from solutions.Viewable.AppleTree to ?{def megaJouleMsg: ?}
  println(AppleTree("Spartan", 2.3, 26.2, 12).megaJouleMsg)
                   ^

by Mike Slinn at July 29, 2014 04:58 PM

Ignoring parameters in routes

One of the nice things about Play is that it doesn't dictate your URL format. That's great because I'm porting an application and need to retain backwards compatibility with old URLs.

I want to match all URLs that start with /foo/bar:

/foo/bar
/foo/bar/0
/foo/bar/1
/foo/bar/baz

How do I do this?

I can't find much documentation. I found old docs for 1.2.7 that said

You can tell Play that you want to match both URLs by adding a question mark after the trailing slash. For example:

GET /clients/? Clients.index

The URI pattern cannot have any optional part except for that trailing slash.

That's a little funny to be its own special case. I don't know how much of that is still true, since it isn't in the current documentation.


I tried

GET     /foo/bar             Foo.bar
GET     /foo/bar/*unused     Foo.bar

and

GET     /foo/bar             Foo.bar
GET     /foo/bar/$unused<.*> Foo.bar

but compilation failed.

Compilation error[Missing parameter in call definition: unused]


Finally, I tried redefining Foo.bar to take a junk argument (with default as the empty string).

GET     /foo/bar             Foo.bar
GET     /foo/bar/*unused     Foo.bar(unused)

and

GET     /foo/bar             Foo.bar
GET     /foo/bar/$unused<.*> Foo.bar(unused)

but it still didn't work.

conflicting symbols both originated in file '/home/paul/my-play-project/target/scala-2.10/src_managed/main/routes_reverseRouting.scala'


How do I match URL prefixes, or ignore parameters?

by Paul Draper at July 29, 2014 04:57 PM

Why is this Clojure program so slow? How to make it run fast?

Here it is clearly explained how to optimize a Clojure program dealing with primitive values: use type annotations and unchecked math, and it will run fast:

(set! *unchecked-math* true)

(defn add-up ^long [^long n]
  (loop [n n i 0 sum 0]
    (if (< n i)
      sum
      (recur n (inc i) (+ i sum)))))

So, just out of curiosity, I've tried it in the lein repl and, to my surprise, found this code running ~20 times slower than expected (Clojure 1.6.0 on Oracle JDK 1.8.0_11 x64):

user=> (time (add-up 1e8))
"Elapsed time: 2719.188432 msecs"
5000000050000000

Equivalent code in Scala 2.10.4 (same JVM) runs in ~90ms:

def addup(n: Long) = { 
  @annotation.tailrec def sum(s: Long, i: Long): Long = 
    if (i == 0) s else sum(s + i, i - 1)
  sum(0, n)
}

So, what am I missing in the Clojure code sample? Why is it so slow (should theoretically be roughly the same speed)?

by Ivan Mikushin at July 29, 2014 04:55 PM

Christian Neukirchen

29jul2014

by Christian Neukirchen (chneukirchen@gmail.com) at July 29, 2014 04:53 PM

StackOverflow

What precisely is the algorithm used by java.lang.Object's hashCode

What is the algorithm used in the JVM to implement java.lang.Object's implicit hashCode() method?

[OpenJDK or Oracle JDK are preferred in the answers].

by djhaskin987 at July 29, 2014 04:52 PM

StackOverflow

Closures in Scala vs Closures in Java

Some time ago Oracle decided that adding closures to Java 8 would be a good idea. I wonder how the design problems are solved there in comparison to Scala, which has had closures since day one.

Citing the Open Issues from javac.info:

  1. Can Method Handles be used for Function Types? It isn't obvious how to make that work. One problem is that Method Handles reify type parameters, but in a way that interferes with function subtyping.

  2. Can we get rid of the explicit declaration of "throws" type parameters? The idea would be to use disjuntive type inference whenever the declared bound is a checked exception type. This is not strictly backward compatible, but it's unlikely to break real existing code. We probably can't get rid of "throws" in the type argument, however, due to syntactic ambiguity.

  3. Disallow @Shared on old-style loop index variables

  4. Handle interfaces like Comparator that define more than one method, all but one of which will be implemented by a method inherited from Object. The definition of "interface with a single method" should count only methods that would not be implemented by a method in Object and should count multiple methods as one if implementing one of them would implement them all. Mainly, this requires a more precise specification of what it means for an interface to have only a single abstract method.

  5. Specify mapping from function types to interfaces: names, parameters, etc. We should fully specify the mapping from function types to system-generated interfaces precisely.

  6. Type inference. The rules for type inference need to be augmented to accomodate the inference of exception type parameters. Similarly, the subtype relationships used by the closure conversion should be reflected as well.

  7. Elided exception type parameters to help retrofit exception transparency. Perhaps make elided exception type parameters mean the bound. This enables retrofitting existing generic interfaces that don't have a type parameter for the exception, such as java.util.concurrent.Callable, by adding a new generic exception parameter.

  8. How are class literals for function types formed? Is it #void().class ? If so, how does it work if object types are erased? Is it #?(?).class ?

  9. The system class loader should dynamically generate function type interfaces. The interfaces corresponding to function types should be generated on demand by the bootstrap class loader, so they can be shared among all user code. For the prototype, we may have javac generate these interfaces so prototype-generated code can run on stock (JDK5-6) VMs.

  10. Must the evaluation of a lambda expression produce a fresh object each time? Hopefully not. If a lambda captures no variables from an enclosing scope, for example, it can be allocated statically. Similarly, in other situations a lambda could be moved out of an inner loop if it doesn't capture any variables declared inside the loop. It would therefore be best if the specification promises nothing about the reference identity of the result of a lambda expression, so such optimizations can be done by the compiler.

As far as I understand 2., 6. and 7. aren't a problem in Scala, because Scala doesn't use Checked Exceptions as some sort of "Shadow type-system" like Java.

What about the rest?

by soc at July 29, 2014 04:49 PM

Efficiency of casting with TypeTag and try/catch

I've got a function that essentially needs to cast objects a lot of times. So the casting method used in the function should be fast and light enough. Here are two methods that I know:

  • Use TypeTags to cast safely
  • Try casting and catch the error

For the first method, however, it's not possible to carry TypeTags implicitly with objects into the function for reasons, so there will be a wrapping class of sorts that holds an object and its TypeTag:

trait Typed[A] {
  val data: A
  val ttag: TypeTag[A]
}
def cast[A, B: TypeTag](t: Typed[A]): Option[B] = {
  if (typeOf[B] <:< typeOf[A]) Some(t.data.asInstanceOf[B])
  else None
}

The second one just tries/catches:

import scala.util.control.Exception._
def cast[A, B](a: A): Option[B] = {
  allCatch opt a.asInstanceOf[B]
}

I've not paid much attention to efficiency before, but this function is going to be pretty large, and it needs to be fast in use. Please tell me which way should be faster!

by Ryoichiro Oka at July 29, 2014 04:48 PM

Scala: Create new list of same type

I'm stuck and the solutions Google offered me (not that many) didn't work somehow. It sounds trivial but kept me busy for two hours now (maybe I should go for a walk...).

I've got a list of type XY, oldList: List[XY] with elements in it. All I need is a new, empty List of the same type.

I've already tried stuff like:

newList[classOf(oldList[0])]
newList = oldList.clone()
newList.clear()

But it didn't work somehow, or required a MutableList, which I don't like. :/

Is there a best (or any working) practice to create a new List of a certain type?
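For what it's worth, a minimal sketch (assuming XY is the element type from the question): the empty immutable list can be named directly, without touching the old list at all.

val newList1: List[XY] = List.empty[XY]
val newList2: List[XY] = Nil
val newList3 = oldList.take(0)   // same type, no explicit annotation needed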

Grateful for any advice, Teapot

P.S. please don't be too harsh if it's simple, I'm new to Scala. :(

by Teapot at July 29, 2014 04:42 PM

How to create a method which invokes another service and returns a Future?

I want to define a method which will return a Future. In this method, it will call another service which also returns a Future.

We have defined a BusinessResult to represent Success and Fail:

object validation {
  trait BusinessResult[+V] {
    def flatMap[T](f: V => BusinessResult[T]):BusinessResult[T]
    def map[T](f: V => T): BusinessResult[T]
  }

  sealed case class Success[V](t:V) extends BusinessResult[V] {
    def flatMap[T](f: V => BusinessResult[T]):BusinessResult[T] = {
      f(t)
    }
    def map[T](f: V => T): BusinessResult[T] = {
      Success(f(t))
    }
  }

  sealed case class Fail(e:String) extends BusinessResult[Nothing] {
    def flatMap[T](f: Nothing => BusinessResult[T]):BusinessResult[T] = this
    def map[T](f: Nothing => T): BusinessResult[T] = this
  }

}

And define the method:

import scala.concurrent._
import scala.concurrent.ExecutionContext.Implicits.global
import validation._

def name: BusinessResult[String] = Success("my name")

def externalService(name:String):Future[String] = future(name)

def myservice:Future[Int] = {
  for {
    n <- name
    res <- externalService(n)
  } yield res match {
    case "ok" => 1
    case _ => 0
  }
}

But this does not compile: the code in myservice can't return a Future[Int].

I also tried to wrap the name with Future:

def myservice:Future[Int] = {
  for {
    nn <- Future.successful(name)
    n <- nn
    res <- externalService(n)
  } yield res match {
    case "ok" => 1
    case _ => 0
  }
}

This is also not compilable.

I know there must be a lot of issues in this code. How can I adjust it so that it compiles?
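The core issue is that the for-comprehension mixes BusinessResult and Future, whose flatMaps produce different types. One possible sketch, reusing the definitions above, is to lift the BusinessResult into a Future first so the whole comprehension stays in Future:

def toFuture[T](r: BusinessResult[T]): Future[T] = r match {
  case Success(v) => Future.successful(v)
  case Fail(e)    => Future.failed(new RuntimeException(e))
}

def myservice: Future[Int] =
  for {
    n   <- toFuture(name)
    res <- externalService(n)
  } yield res match {
    case "ok" => 1
    case _    => 0
  }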

by Freewind at July 29, 2014 04:42 PM

CompsciOverflow

Variable Length Encoding of Integers Using a Modulus Algorithm

Continuing on the theme from my last question Variable Length Encoding of Integers, I have come up with a simple encoding scheme, but for which an efficient algorithm eludes me.

The constraints are simple enough: no (binary representation) number is allowed where it is divisible by 3, or a subset (prefix) of that representation is divisible by 3.

To terminate the number, two bits are added so that the number is divisible by 3.

For example 1101 is allowed since neither 1101, 101, 01 nor 1 are divisible by 3.

However, 1011 is not allowed since 11 is divisible by 3.

The representation 1101 would then have 10 prepended to make it divisible by 3 (101101).

All this allows a stream of bits to be read, at each point testing to see if the number is divisible by three. If not, keep reading the next bit until it is divisible by three. Hence allowing for a (unique) variable-length encoding.

My question is about the mapping of integers onto this encoding scheme. However hard I try, I can't seem to create a straightforward algorithm to do the mapping. Is there one?

by Guillermo Phillips at July 29, 2014 04:39 PM

StackOverflow

Matching classes (from reflection) by inheritance on Scala

I'm working with Scala 2.10, and I have a situation in which I have a sequence of classes that I loaded via reflection, something like this:

val names = Seq("Foo", "Bar", "Baz")
val classes = names map Class.forName

(In the real problem I have several classes loaded recursively from a directory.)

And let's say the hierarchy looks something like this:

class A
class B
class C
trait D

class Foo extends A
class Bar extends B
class Baz extends C with D

I'm well aware of type erasure, so I don't know which way I should take now. Since classes would be of type Seq[Class[_ <: Any]], what could I do to match the classes, including by subtyping?

For example, I'd like to do something like this:

classes match { i =>
  case /* A */ => println("i inherits from A") // should match Foo
  case /* B */ => println("i inherits from B") // should match Bar 
  case /* D */ => println("i has D trait") // should match Baz
};

How can I achieve that?
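One possible sketch that sidesteps erasure entirely: the values in classes are runtime Class objects, so java.lang.Class#isAssignableFrom can test the inheritance directly (classOf[D] also works for traits, which compile to interfaces):

classes.foreach { c =>
  if (classOf[A].isAssignableFrom(c)) println(s"${c.getName} inherits from A")
  else if (classOf[B].isAssignableFrom(c)) println(s"${c.getName} inherits from B")
  if (classOf[D].isAssignableFrom(c)) println(s"${c.getName} has D trait")
}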

by Paulo Torrens at July 29, 2014 04:27 PM

Sealed traits in Scala

Can I get a List of all the case objects that implement a sealed trait? For example,

I have a sealed trait as below:

sealed trait MyTrait
case object A extends MyTrait
case object B extends MyTrait

I now want something that would give me the List of all the objects that implement this sealed trait.
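A sketch of one starting point: for a sealed trait the compiler records the direct subclasses, and runtime reflection (scala-reflect) can list their symbols. Turning each symbol back into the singleton instance additionally needs a module mirror, which is omitted here.

import scala.reflect.runtime.universe._

val subclasses: Set[Symbol] =
  typeOf[MyTrait].typeSymbol.asClass.knownDirectSubclasses

// subclasses.map(_.name.toString) would give Set("A", "B")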

by user3102968 at July 29, 2014 04:23 PM

CompsciOverflow

What is the time complexity of this program? [on hold]

Running time of program? Please explain in detail.

function(n){ for(int i=0;i

by vishwajit kumar vishnu at July 29, 2014 04:22 PM

TheoryOverflow

Theorem prover fails to find simple set theory proof?

I am trying to use an automated theorem prover (SNARK) to prove the following goal from the following assertions (in first-order logic) -

Assertions -

  1. The relation ‘part of’ is transitive.
  2. The sum of a class is defined as follows: x is the sum of a class alpha if alpha is contained in the ‘parts’ of x, and if when y is any part of x there is always a z belonging to alpha having parts in common with the parts of y.
  3. Every class which is not null has a sum.

Proof Goal -

  4. The relation ‘part of’ is reflexive.

The fact that 4 is provable from 1-3 is claimed by Alfred Tarski in a work on mereology, although he does not give a proof. I have discovered a truly remarkable proof myself, which this question is too small to contain. (Just kidding - I’ll supply the proof if requested - it is an elementary proof by contradiction).

However SNARK is currently unable to find a proof, even with the axioms of (NBG) set theory in [1]. Any suggestions will be greatly appreciated.

[1] Automated Development of Fundamental Mathematical Theories, by Art Quaife, Kluwer Acadamic Publishers (1992)

EDIT -

Here is a nicely formatted representation of the above -

1.1.2:

This was originally given as - $$S{ = }_{ Df }\hat { x } \hat { \alpha } \{ \alpha \subset \vec { { P }^{ ‘ } } x:.(y):yPx.\supset .(\exists z).z\in \alpha .\vec { { P }^{ ‘ } } y\cap \vec { { P }^{ ‘ } } z\neq \Lambda \}$$

(Peano-Russell notation)

by Atriya at July 29, 2014 04:21 PM

StackOverflow

Scala detect mimetype of an Array[Byte] image

I'm looking for a way in Scala to detect the MIME type of an image given as an Array[Byte]. Are there any good libraries for this in Scala?
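Two options that come to mind (a sketch, not an exhaustive list): the JDK's own content-type sniffing, or Apache Tika for broader coverage.

import java.io.ByteArrayInputStream
import java.net.URLConnection

def mimeTypeOf(bytes: Array[Byte]): Option[String] =
  Option(URLConnection.guessContentTypeFromStream(new ByteArrayInputStream(bytes)))

// With Apache Tika on the classpath:
// val mime: String = new org.apache.tika.Tika().detect(bytes)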

br dan

by sonix at July 29, 2014 04:16 PM

scala macro generic field of generic class not apply class generic type parameter

Generic case class

case class GroupResult[T](
  group: String,
  reduction: Seq[T]
)

Macro method

 def foo[T] = macro fooImpl[T]

 def fooImpl[T: c.WeakTypeTag](c: Context) = {
    import c.universe._
    val tpe = weakTypeOf[T]
     tpe.declarations.collect {
      case m: MethodSymbol if m.isCaseAccessor => println(m.returnType)
    }
    c.literalUnit
  }

When I invoke foo[GroupResult[Int]]

The output is

String
Seq[T]

T is not applied? How can I get the applied Seq[Int]?
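One possible tweak (a sketch against the same 2.10 macro context): ask for the accessor's return type as seen from the applied type, so the class type parameter gets substituted.

tpe.declarations.collect {
  case m: MethodSymbol if m.isCaseAccessor =>
    // prints Seq[Int] for reduction when T = Int
    println(m.returnType.asSeenFrom(tpe, tpe.typeSymbol))
}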

by jilen at July 29, 2014 04:15 PM

Multiple Values for one enum

So I am creating a parser to parse some configuration files made by our client engineers. I don't particularly trust our client engineers. They can usually spell things right, but they can never remember what to capitalize. This makes Enum classes kind of troublesome, in that they go and break the program because the Enum fails if they don't type in exactly the right string.

Here is my Enum Class:

object EnumLayoutAlignment extends Enumeration
{
     type Alignment = Value
     val Vertical = Value("Vertical")
     val Horizontal = Value("Horizontal")
}

Is there a way I could make it so that "Vertical", "vertical", and "VERTICAL" all map to the same enum value?

EDIT: @Radai brought up the good point of just calling .upper() on the input, but I would also like to accept "vert" and other similar inputs.

I used this as an example
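A minimal sketch of one way to do that with the Enumeration above: normalise the input and keep a small alias table for abbreviations (the alias entries are just examples).

object EnumLayoutAlignment extends Enumeration {
  type Alignment = Value
  val Vertical = Value("Vertical")
  val Horizontal = Value("Horizontal")

  private val aliases: Map[String, Alignment] = Map(
    "vert"  -> Vertical,
    "horiz" -> Horizontal
  )

  def parse(s: String): Option[Alignment] = {
    val key = s.trim.toLowerCase
    values.find(_.toString.toLowerCase == key).orElse(aliases.get(key))
  }
}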

by inquisitiveIdiot at July 29, 2014 04:13 PM

Lobsters

Dynamic Typing: A Local Minimum for Code Comprehension

As a relatively-new Software Engineer coming from the world of Pure Math, Code Comprehension has become the single most important metric to my personal satisfaction and productivity. These are my thoughts on this subject. I’d love to hear yours.

by aaronlevin at July 29, 2014 04:11 PM

Fefe

Does anyone else find it downright bizarre, ...

Does anyone else find it downright bizarre that the West is seriously accusing Putin of "neo-imperialist politics"? For illustration, may I briefly point to the map of current NATO memberships? For comparison: the state of affairs before the fall of the Wall.

Just so it is on the table whose empire has been expanding here.

I bring this up because I read this commentary in "Stern".

July 29, 2014 04:02 PM

Die EU "beschließt Strafmaßnahmen gegen Russland". ...

Die EU "beschließt Strafmaßnahmen gegen Russland". Wer sich jetzt fragt: Warum? Haben sie Beweise für Russlands Schuld an irgendwas gesehen? Nein, haben sie natürlich nicht. Brauchen sie ja auch nicht. Es ist ja nicht so, als ob sich jetzt plötzlich der Nebel lichtet und die CDU- und SPD-Wähler plötzlich aus ihrer selbstverschuldeten Unmündigkeit ausgehen. Die wissen schlicht: Es ist egal, was sie tun. Die könnten mit einer Schrotflinte live im Fernsehen Babykatzen erschießen und würden wiedergewählt.

Was für Strafen sind das denn?

EU-Diplomaten sagten, unter anderem sollen der russische Zugang zu den EU-Finanzmärkten erschwert und Rüstungsexporte verboten werden.
Wie, Moment, Rüstungsexporte sollen verboten werden? Russland kann man ja schlecht Exporte verbieten, sollen hier also Rüstungsexporte der EU-Länder nach Russland verboten werden? Nee, oder? Darf ich bei der Gelegenheit mal auf die Liste der größten Waffenexporteure verweisen? Das wird Russland ja furchtbar treffen, wenn man ihnen keine Waffen aus der EU oder den USA mehr verkauft!1!! Was sollen die dann bloß machen! Am Ende selber Waffen herstellen?!

Mann was für eine Farce.

Am Ende bleibt von den Sanktionen übrig, dass sie für Kredite an den Finanzmärkten mehr Zinsen zahlen müssen. Das ist jedenfalls der Plan des Westens. Und wer verdient an sowas? Die Bankster! Na SO ein Zufall!1!! Wie ein Bailout, nur zahlen die Russen!

July 29, 2014 04:02 PM

Lobsters

Flight rules for git

Flight rules are a guide for astronauts (now, programmers using git) about what to do when things go wrong.

by tobym at July 29, 2014 03:40 PM

CompsciOverflow

Converting DFA to Regular Expression "equation-method(?)" [duplicate]

This question already has an answer here:

I have trouble understanding my textbook on how to convert a DFA to a regular expression using the "equation method" (I don't know what it's called). If someone could explain step by step in detail what's going on it would be great (or maybe a few steps to get me going). (Old exam task.)

I have this DFA:

[DFA diagram]

And the solution is:

[Solution, shown as two columns of equations]

I think I understand the left column. First expression in left column: $E_0 = 0E_1 + 1E_2$. $E_0$ is a regular expression that "represents the start-state $q_0$". You can choose two paths from $q_0$. On a zero you go to $q_1$ (Zero concatenated with the regular expression $E_1$). Or on a one you go to $q_2$ (one concatenated with the regular expression $E_2$). And the same principle goes for the rest of the column. How do I continue from this?

by TheEagle at July 29, 2014 03:38 PM

Describe a TM through denotation of the transition function

I'm trying to describe a TM through denotation of the transition function. Given is a TM that recognizes the language

$$ L = \{\, w\#w \mid w \in \{0,1\}^* \,\} $$ over the input alphabet: $$ \Sigma = \{0, 1, \#\} $$

My guess is to first place a word $$ w \in \Sigma^* $$

on the tape, one symbol per cell, one after the other,

and the rest would be denoted with blank squares. Something like this: $$ ...w|\#|w|\square... $$ The head would be on the first symbol of $w$.

So I guess now I know that
$$ \Gamma = \{w,\#,\square\} $$

I could probably try to make a table using what I have now:

for $w$:

$q_0 = (q_0,w,\#,R)$ R would be the direction the head is going

$q_1 = (q_{yes},w,\#,N)$ N means the head doesn't move and $q_{yes}$ means that the TM accepts w

I'm not sure if what I'm doing is correct. I would appreciate it if someone could tell me if I'm on the right track.

by nubz0r at July 29, 2014 03:37 PM

What is a productive set of all natural numbers

I'm trying to come up with a recursive language with a non-recursive subset. Many if not all of the examples I've found describe all natural numbers as recursively enumerable and their productive set (of Gödel numbers?) as non-recursive. I'm having a hard time deciphering the generic definition associated with productive set.

by frox_io at July 29, 2014 03:30 PM

StackOverflow

Scalatest mocking a db

I am pretty new to using Scala/Scalatest and I am trying to write a few test cases that mock a db.

I have a function called FindInDB(entry : String) that checks if entry is in the db, like so:

entry match {
  case `entry` =>
  if(db.table contains entry) {
    true
  }
    false
}

FindInDB is called in another function, which is defined in a class called Service. I want to be able to mock the db.table part. From reading the ScalaTest docs I know I could mock the class in which FindInDB is defined and control what the function that calls FindInDB returns, but I want to test the FindInDB function itself and control what is in db.table through a mock.
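One possible sketch (the names are illustrative, not the poster's code): make the db a dependency of Service, so a test can pass a fake whose table is fully controlled, with no mocking framework strictly required.

trait Db { def table: Set[String] }

class Service(db: Db) {
  def findInDB(entry: String): Boolean = db.table contains entry
}

// In a ScalaTest suite:
// val fakeDb = new Db { val table = Set("known-entry") }
// assert(new Service(fakeDb).findInDB("known-entry"))
// assert(!new Service(fakeDb).findInDB("missing-entry"))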

I hope that makes sense. Thank you

by user1077071 at July 29, 2014 03:30 PM

CompsciOverflow

Why does ε-greedy $Q$-learning not oscillate?

I have an intuitive question on the convergence of $Q$-learning. In $Q$-learning, at each step a $Q$-value is learned for the state-action pair, where the action is selected according to the $\epsilon$-greedy policy determined by the $Q$ values.

Now my question is: due to the $\epsilon$-greedy policy (i.e. due to exploration), is it not possible that the $Q$ values oscillate and do not converge? Each time I give a chance to a non-greedy action, its value is improved and may after some steps become higher than the value of the greedy action, which then becomes the greedy action, and the same thing continues. Is choosing a small $\epsilon$ enough to prevent this?

by sosha at July 29, 2014 03:29 PM

QuantOverflow

CQG API solutions to execute orders, monitor positions and rebalance based on calculations pulled into a C# solution

Could someone provide me with contact details, as I would like to discuss / review any C# or other solutions available / developed for CQG that can be used to execute orders, monitor positions and execute new orders based on a combination of position data and signals generated from third-party charting or the CQG charting software.

CQG has examples of these functions in the API example section on its web site - I am looking to engage someone to help get these to work or extend them.

Thanks Robert

by Robert at July 29, 2014 03:23 PM

Loading HF stock data into excel

Are there any free, open-source VBA add-ins or R packages that can be linked, using the Yahoo Finance / Google Finance / other data source APIs, to continuously download intraday data into Excel or R?

https://code.google.com/p/finance-data-to-excel/

This is the closest I have found to loading D/M/Y prices into Excel, but it doesn't allow for intraday. I have a few charts with my own technical indicators that I would love to have refreshed every few seconds with new data. I know that Yahoo Finance only provides delayed data, but that's not an issue for me.

I miss having Bloomberg :(

by jessica at July 29, 2014 03:19 PM

CompsciOverflow

How to represent circles in x-y coordinates

I would like to be able to represent circles in x-y coordinates.

Each circle contains an x and y coordinates and radius in double data type.

My goal is to compare circles with each other whether they are partially or completely overlapping.

I am looking for efficient ideas. Honestly the only idea that comes to my mind is draw a line(let's say l1) from x1,y1 to x2,y2 and the length of this line is larger than addition of r1 and r2 then it does not overlap, if r1+r2 =< l1 then it overlaps, but I don't know how to find whether it is completely overlapping or partially. Also this wouldn't work for cases where I am combining more than one circle.

by Sarp Kaya at July 29, 2014 03:18 PM

StackOverflow

Gradle build errors on Team City

Since 28/07/2014 my Gatling load tests have been failing on TeamCity. I'm using the Gradle task runner to execute the task. They work locally, and I have had 2 different devs pull the same source code and run it successfully on their machines.

FYI. I use gradle wrapper.

I get the following error:

Exception in thread "main" java.lang.UnsupportedClassVersionError: io/gatling/app/Gatling : Unsupported major.minor version 51.0

I think its refering to the following lines from my build.gradle file.

task runCMXTest(dependsOn: 'testTeardown', type: JavaExec) {
        classpath sourceSets.main.runtimeClasspath
        main = "io.gatling.app.Gatling"
        args = Eval.me("['-s', 'My.Simulation']")
    }

I get the dependencies from the following repository - https://oss.sonatype.org/content/repositories/snapshot

I've checked the Java version on the TC box and it's 1.7.55.

I understand the error suggests the JDK runtime version is different from the version it was compiled for, but the major versions are the same. The minor versions are different on the many dev machines I've tested it on, so this doesn't seem to be an issue.

I would appreciate help with this. Could recent commits have caused this issue, given that the TC box hasn't changed?

Regards,

by Christo at July 29, 2014 03:15 PM

Understanding foldLeft function - how are parameter values typed?

Reading the signature of foldLeft : def foldLeft[B](z: B)(f: (B, A) => B): B = { z Type does not seem to be utilized in below implementation ?

Does f correspond to (List[Int](), 0) in the foldLeft signature?

object foldleftfun {
  println("Welcome to the Scala worksheet")       //> Welcome to the Scala worksheet

  val numbers = List(1, 2, 3)                     //> numbers  : List[Int] = List(1, 2, 3)

  numbers.foldLeft((List[Int](), 0)) {
    (resultingTuple, currentInteger) => {

            println(resultingTuple)
            println(currentInteger)
            println("")

        (currentInteger :: resultingTuple._1, currentInteger + resultingTuple._2)
      }
  }                                               //> (List(),0)
                                                  //| 1
                                                  //| 
                                                  //| (List(1),1)
                                                  //| 2
                                                  //| 
                                                  //| (List(2, 1),3)
                                                  //| 3
                                                  //| 
                                                  //| res0: (List[Int], Int) = (List(3, 2, 1),6)

}
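
For reference, a minimal sketch making the correspondence explicit: here B is (List[Int], Int), z is the seed value (List[Int](), 0), and f is the two-argument function literal passed in the second parameter list:

val z: (List[Int], Int) = (List[Int](), 0)
val f: ((List[Int], Int), Int) => (List[Int], Int) =
  (acc, n) => (n :: acc._1, n + acc._2)

List(1, 2, 3).foldLeft(z)(f)   // (List(3, 2, 1), 6)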

by blue-sky at July 29, 2014 03:14 PM

QuantOverflow

How to build an execution trading system with CQG API?

I am currently using CQG for spread trading and have a spread trading strategy in CQG chart. I am trying to automate my spread trading strategy in CQG, but CQG told me to look at CQG API samples to build my own system or get third party software.

The CQG trade system doesn't allow you to automate your strategy in CQG IC, so I need automated trading software to execute my strategy. The CQG API should allow you to build your own execution system. Could you tell me which example in the CQG API samples helps to start building a spread trading system? For example: buy instrument A and sell instrument B when the spread price goes below the lower Bollinger band, and place an OCO limit order for the stop loss and target exit. Would it be hard to make that kind of system using CQG's API?

by user948950 at July 29, 2014 03:13 PM

What is the distribution of stock splits?

I want to know how rare are splits more extreme than, say, 7:1 (and reverse splits similarly). An answer here points to announcements on Yahoo Finance, but apparently only monthly views.

What is a better database of historical splits, or simply the conventional wisdom about common and uncommon splits?

(The universe of assets I am thinking of is anything that could get an ISIN, but I think it usually affects stocks only.)

by László at July 29, 2014 03:13 PM

StackOverflow

Why is this implicit binding not being picked up by the Scala compiler?

I've set up an implicit binding below, with the expectation that the TweetProvider trait will be bound to FancyRestaurant (i.e. a toy app for simulating a twitter feed, where people generate tweets after they eat at a fancy restaurant).

However, it seems that the implicit binding which I created isn't getting utilized.

Any thoughts on why this implementation is incorrect?

import scala.Product
import scala.io.BytePickle.PU
import scala.tools.cmd.ParserUtil

trait TweetProvider[Prod] {
  def getTweet(a:Prod): String;
}

class FancyRestaurant {
  def name(){
    "Fancy Restaurant"
  }
}

class RestaurantTweetosphere {

  def getTweet[Prod]
    (prod : Prod)
    (implicit product : TweetProvider[Prod])
        = product.getTweet(prod);

  implicit val FancyRestaurantTweet = new TweetProvider[FancyRestaurant]{
    def getTweet(r : FancyRestaurant) = {
      "OMG I love this "+r.name();
    }
  }

  def run() = {
    val c = new FancyRestaurant;
    c.getTweet() // <-- doesnt compile

  }

}
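
A minimal sketch of one way the implicit could be found, assuming the object below sits in the same file as class FancyRestaurant (so it acts as its companion) and that the intent is to ask the tweetosphere, not the restaurant, for the tweet:

object FancyRestaurant {
  // implicits in the companion object are part of the implicit scope whenever a
  // TweetProvider[FancyRestaurant] is required; note that name() as written above
  // returns Unit because it is missing an '=', so a literal is used here instead
  implicit val tweeter: TweetProvider[FancyRestaurant] = new TweetProvider[FancyRestaurant] {
    def getTweet(r: FancyRestaurant): String = "OMG I love this Fancy Restaurant"
  }
}

object Run extends App {
  val sphere = new RestaurantTweetosphere
  println(sphere.getTweet(new FancyRestaurant))   // the implicit above is resolved here
}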

by jayunit100 at July 29, 2014 03:11 PM

TheoryOverflow

Are there any cases where quantum has given insight for classical algorithms?

To be more specific, has it ever happened that we've made some kind of significant improvement to a classical algorithm or problem as a result of some "trick" or insight gained from looking at quantum algorithms?

by hadsed at July 29, 2014 03:10 PM

StackOverflow

How to debug clojure file?

No breakpoint can be set on line 5, which contains [x].

IntelliJ won't let me do so. I used different plugins, such as "La Clojure" and "Cursive". Both stop at line 3 rather than line 5.

So, how do people step into code in Clojure?

Is there any syntax suggestion, or maybe a tool, to help with this?

(defn flattenlist
  ([x & more]
    (concat (if (vector? x)
              (apply flattenlist x)
              [x]
            )
            (if (= more nil)
              nil
              (apply flattenlist more))))
  )
(flattenlist [[1 [[2]]] 3 [4 5] 6])

by CodeFarmer at July 29, 2014 03:03 PM

Fefe

The USA will apparently soon abolish "airfare listings ...

The USA will apparently soon abolish "airfare listings only inclusive of all taxes and fees" again. For reasons that surely have nothing to do with the airlines having massively "donated" to the congressman who introduced the bill.

July 29, 2014 03:02 PM

Just as a reality check on how Putin and his ...

Just as a reality check on how Putin and his people are currently assessing the situation (last paragraph):
One person close to Mr Putin said the Yukos ruling was insignificant in light of the bigger geopolitical stand-off over Ukraine. “There is a war coming in Europe,” he said. “Do you really think this matters?”

July 29, 2014 03:02 PM

The Russians are testing a new cruise missile. ...

The Russians are testing a new cruise missile. For reasons that surely have nothing to do with Ukraine or the 50-billion fine.

July 29, 2014 03:02 PM

/r/clojure

TheoryOverflow

Complexity of solving linear equations

What is known about the complexity of solving a system of linear equations over some finite field? I know that there exists an $O(n^3)$ algorithm (Gauss) that computes a solution and that for sparse systems there are even better algorithms. However, I was wondering if there was some complexity-theoretic characterization of this problem. For example, is the corresponding decision problem in $\mathbf{NC}$? Is it complete for any complexity class?

by Alan Sz at July 29, 2014 03:00 PM

StackOverflow

Advantages of Scala emitting bytecode for the JVM 1.7

As per Scala 2.10, what are the advantages (if any) of emitting bytecode for the JVM 1.7, when compared to the default of emitting for the 1.6?

by Hugo S Ferreira at July 29, 2014 02:56 PM

/r/netsec

StackOverflow

Linux and unicode

I know (or I think I know) about character encoding and Unicode, as much as is in this article: http://www.joelonsoftware.com/articles/Unicode.html

Linux ❤ Unicode

I saved a Japanese character in a file, and opening it in multiple ways gives me multiple results.

Counterclockwise (roughly)

  1. 'cat' inside yakuake shows me the right results.
  2. vim inside yakuake doesn't show it right!
  3. gvim opened from yakuake shows it wrong too. (Bigger gvim in center of screen)
  4. gvim opened from Alt-F2 shows it right, bottom gvim.
  5. Intellij opening it directly shows it right. (not in image)
  6. Reading using scala in Intellij shows it wrong. scala.io.Source.fromFile( , "UTF-8" ).mkString

Could someone please tell me what's up here? Especially the vim inconsistency? I can bear Linux (X) and IntelliJ behaving arbitrarily, but vim doing that tells me that it's my understanding that's faulty.

EDIT: To answer @user3666209's question, all the vim/gvim's have 'empty' file encoding.

by user247077 at July 29, 2014 02:53 PM

CompsciOverflow

What is the term for this set

I have a set of related data/objects for which, when undergoing some algorithm, there should be only one valid match. Is there a unique term for this type of set?

A common practical use case would be a list displayed for user selection, or a list of keys that should have a single corresponding entry in a database table.

To illustrate this, here are a few examples:

  • [True, False] (boolean)
  • [Forward, Neutral, Reverse] (state machine status)
  • [Rare, Medium, Well] (user preference)
  • [Credit, Debit] (reference type)

In each of these cases, only one element in the set will be contextually valid, where the context may be a database record, an instance of an object in memory, or a question on a survey.

Examples of sets that wouldn't meet this criterion:

  • [Garage, Kitchen, Bedroom, Bathroom] (rooms in a home)
  • [Email, Phone, Text] (method of communication)
  • [All Countries in Europe] (places I visited)

In these cases, the set is used as a bucket from which a selection may be made - but the selection is not necessarily expected to be a single item in the set.

by PinnyM at July 29, 2014 02:49 PM

AWS

New Amazon Climate Research Grants

Many of my colleagues are focused on projects that lead to a reduction in the environmental impact of our work. Online shopping itself is inherently more environmentally friendly than traditional retailing. Other important initiatives include Frustration-Free Packaging, Environmentally Friendly Packaging, our global Kaizen program, Sustainable Building Design, and a selection of AmazonGreen products. On the AWS side, the US West (Oregon) and AWS GovCloud (US) Regions make use of 100% carbon-free power.

In conjunction with our friends at NASA, we announced the OpenNEX (NASA Earth Exchange) program and the OpenNEX Challenge late last year. OpenNEX is a collection of data sets produced by Earth science satellites (over 32 TB at last count) and a set of virtual labs, lectures, and Amazon Machine Images (AMIs) for those interested in learning about the data and how to process it on AWS. For example, you can learn how to use Python, R or shell scripts to interact with the OpenNEX data, generate a true-color Landsat image, enhance Landsat images with atmospheric corrections, or work with the NEX Downscaled Climate projections (NEXDCP-30).

Amazon Climate Research Grants
We are interested in exploring ways to use computational analysis to drive innovative research into climate change. In order to help drive this work forward, we are now calling for proposals for Amazon Climate Research Grants. In early September, we will award grants of free access to supercomputing resources running on the Amazon Elastic Compute Cloud.

The grants will provide access to more than fifty million core hours via the EC2 Spot Market. Our goal is to encourage and accelerate research that will result in an improved understanding of the scope and effects of climate change, along with analyses that could suggest potential mitigating actions. Recipients have the opportunity to deliver an update on their progress and to reveal early findings at the AWS re:Invent conference in mid-November.

Timelines
If you are interested in applying for an Amazon Climate Research Grant, here are some dates to keep in mind:

  • July 29, 2014 - Call for proposals opens.
  • August 29, 2014 - Submissions are due.
  • Early September 2014 - Recipients notified; AWS grants issued.
  • November 2014 - Recipients present initial research and findings at AWS re:Invent.

To learn more or to submit a proposal, please visit the Amazon Climate Change Grants page.

AWS for HPC
Let's wrap up this post with a quick look at an AWS HPC success story!

The Globus team at the University of Chicago/Argonne National Lab used an AWS Research grant to create the Galaxy instance and use EC2 Spot instances to run various climate impact models and applications that project irrigation water availability and agricultural production under climate change. You can learn more about this "Science as a Service on AWS" project by taking a peek at the following presentation:

Your Turn
I am looking forward to taking a look at the proposals and to seeing the first results at re:Invent. If you have an interesting and relevant project in mind, I invite you to apply now!

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at July 29, 2014 02:39 PM

/r/emacs

Fefe

Palestinian hackers have allegedly stolen the plans ...

Palestinian hackers have allegedly stolen the plans for the Israeli missile defense system "Iron Dome", and did so back in 2011-2012.

I don't know, the report doesn't make much sense to me. And then what? It's not as if Hamas had the technology for evasion systems or electronic jammers. They don't even have the technology to aim their rockets. But they have the money and know-how for a hacking department?!

Besides: there is internet in the Gaza Strip, and with that alone you can supposedly get close to Israeli defense projects!?

I don't know, somehow none of it adds up. At least that's how it looks to me.

July 29, 2014 02:02 PM

DataTau

/r/netsec

TheoryOverflow

Properties of Star shaped polygon

The problem I am currently stuck on is the following:

Let $ab$ and $cd$ be constructed edges of the visibility polygon $P$ such that
$1.$ $a$ and $d$ are reflex vertices of $P$,
$2.$ $b$ and $c$ are some points on the edges of P and
$3.$ the counter-clockwise boundary of $a$ to $d$ passes through $b$ and $c$.
Prove that if there exists another such pair of constructed edges in $V(q)$, then $P$ is not a star-shaped polygon.

So far I have been able to prove that if $uv$ is an edge of a star-shaped polygon $P$, where $v$ is a reflex vertex, and we extend $uv$ from $v$ till it meets a point $u'$ on the boundary of $P$, then all points of the kernel of $P$ lie on the same side of $vu'$.

Am I on the right track? Any link/reference/hint will be helpful.

by Dibyayan at July 29, 2014 01:55 PM

CompsciOverflow

Asymptotic behaviour of sum of consecutive powers (bivariate)

Are there some (bivariate) closed form formulas for the asymptotic behaviour of the sum:

$\sum_{k=1}^{n} k^d$

where $n$ and $d$ are large integers? I am especially interested in a lower bound of the form $\Omega(f(n,d))$. I am aware there is an exact formula as a polynomial involving the Bernoulli numbers, and the asymptotic behavior has been discussed in some other question when $d$ is fixed, but here I consider both $n$ and $d$ as unbounded.
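
For what it is worth, one elementary bivariate bound follows from comparing the sum with the integral of $x^d$ (a sketch of the standard argument, not a sharp estimate):

$$\frac{n^{d+1}}{d+1} = \int_0^{n} x^d\,dx \;\le\; \sum_{k=1}^{n} k^d \;\le\; \int_1^{n+1} x^d\,dx = \frac{(n+1)^{d+1}-1}{d+1},$$

so in particular $\sum_{k=1}^{n} k^d = \Omega\!\left(n^{d+1}/(d+1)\right)$, valid for all $n, d \ge 1$.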

by Joseph Stack at July 29, 2014 01:55 PM

StackOverflow

Best way to merge two maps and sum the values of same key?

val map1 = Map(1 -> 9 , 2 -> 20)
val map2 = Map(1 -> 100, 3 -> 300)

I want to merge them, and sum the values of the same keys. So the result will be:

Map(2->20, 1->109, 3->300)

Now I have 2 solutions:

val list = map1.toList ++ map2.toList
val merged = list.groupBy ( _._1) .map { case (k,v) => k -> v.map(_._2).sum }

and

val merged = (map1 /: map2) { case (map, (k,v)) =>
    map + ( k -> (v + map.getOrElse(k, 0)) )
}

But I want to know if there are any better solutions.
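
For comparison, a minimal sketch of another common variant, which folds the second map over the first without building an intermediate list:

val merged = map1 ++ map2.map { case (k, v) => k -> (v + map1.getOrElse(k, 0)) }
// Map(1 -> 109, 2 -> 20, 3 -> 300)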

by Freewind at July 29, 2014 01:44 PM

Scala: Merge map

How can I merge maps like below:

Map1 = Map(1 -> Class1(1), 2 -> Class1(2))
Map2 = Map(2 -> Class2(1), 3 -> Class2(2))

After merging:

Merged = Map( 1 -> List(Class1(1)), 2 -> List(Class1(2), Class2(1)), 3 -> Class2(2))

It can be a List, Set or any other collection that has a size attribute.

by Robinho at July 29, 2014 01:44 PM

Scala: how to merge a collection of Maps

I have a List of Map[String, Double], and I'd like to merge their contents into a single Map[String, Double]. How should I do this in an idiomatic way? I imagine that I should be able to do this with a fold. Something like:

val newMap = Map[String, Double]() /: listOfMaps { (accumulator, m) => ... }

Furthermore, I'd like to handle key collisions in a generic way. That is, if I add a key to the map that already exists, I should be able to specify a function that returns a Double (in this case) and takes the existing value for that key, plus the value I'm trying to add. If the key does not yet exist in the map, then just add it and its value unaltered.

In my specific case I'd like to build a single Map[String, Double] such that if the map already contains a key, then the Double will be added to the existing map value.

I'm working with mutable maps in my specific code, but I'm interested in more generic solutions, if possible.
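
A minimal sketch of the fold described above, with the collision handler passed in as a parameter (mergeMaps is a hypothetical helper name):

def mergeMaps[K, V](maps: List[Map[K, V]])(collide: (V, V) => V): Map[K, V] =
  maps.foldLeft(Map.empty[K, V]) { (acc, m) =>
    m.foldLeft(acc) { case (a, (k, v)) =>
      // combine the existing value with the new one on a key collision
      a + (k -> a.get(k).map(collide(_, v)).getOrElse(v))
    }
  }

// e.g. adding the Doubles on collision: mergeMaps(listOfMaps)(_ + _)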

by Jeff at July 29, 2014 01:44 PM

Concatenate two immutable maps - which elements are preferred?

When concatenating two immutable maps, it seems that the elements of the right operand will "overwrite" the elements of the left one:

scala> List((1, 2), (5, 6)).toMap ++ List((5, 9)).toMap
res13: scala.collection.immutable.Map[Int,Int] = Map(1 -> 2, 5 -> 9)

scala> List((5, 9)).toMap ++ List((1, 2), (5, 6)).toMap
res14: scala.collection.immutable.Map[Int,Int] = Map(5 -> 6, 1 -> 2)

I would like to know if this is a rule in Scala?

From the Scala API docs I could not figure this out.

by John Threepwood at July 29, 2014 01:44 PM

TheoryOverflow

NP-hardness of tasks graph assignment to two heterogenous servers

I have a problem with determining whether the following assignment problem is NP-hard. Any comments and suggestions would be appreciated.

Problem definition

Given is a directed acyclic graph $G=(V,E)$ representing a set of computation tasks which need to be performed. Each vertex $v$ represents a single, non-divisible computation task. Each edge $e$ denotes communication between two tasks, i.e. transmission of output data from one task to another.

(figure: example task graph $G$)

Each computation task can be executed on one of two servers: $S_A$ or $S_B$. The servers have differing architectures, so $S_A$ is able to perform some computations more efficiently than $S_B$, and vice versa. Therefore, each computation task has two values associated with it: $c_A(v)$ and $c_B(v)$, which denote the cost of executing the task on server $S_A$ and $S_B$, respectively.

Consequently, each edge $e$ has two associated values representing the data transmission cost: $t_{internal}(e)$ (both tasks executed on the same server) and $t_{external}(e)$ (the tasks are executed on different servers). We assume $t_{internal}(e) = 0$ (i.e. internal server communication has no significant cost). $t_{external}(e) > 0$ and is proportional to the amount of data which needs to be transmitted between the tasks. The transmission cost is symmetrical, i.e. it does not matter if we transmit the data from $S_A$ to $S_B$ or from $S_B$ to $S_A$.

(figure: execution and transmission costs on the example graph)

The total cost of computation $C$ is the sum of costs of individual task executions and the costs of communication between tasks, i.e.:

$C = \sum\limits_{v \in V}c(v) + \sum\limits_{e \in E}t(e) $

Question

Is determining the lowest-cost assignment of tasks $v$ to servers ($S_A$ and $S_B$) NP-hard?

Attempted approach

I have tried transforming the graph $G$ to a new graph $G^Z$ with doubled vertices, as shown in the picture below.

(figure: the transformed graph $G^Z$ with doubled vertices)

In the new graph, the problem is to choose one vertex from each pair ($v^A$, $v^B$) so that the total weight of edges remaining in the resulting graph would be minimal.

I have tried to find possible analogies to well-known graph theory problems (e.g. maximum independent set, vertex cover, graph partitioning, etc.). Alas, to no avail.

by marszall87 at July 29, 2014 01:33 PM

/r/netsec

StackOverflow

What does the get method do in scala?

post("/api/v1/multi_preview/create"){
    val html = getParam("html").get
    val subject = getParam("subject").get
 }

I want to know exactly what the .get method does in Scala. getParam() is already returning the parameters of the POST request. I know that .get makes things easier, as we don't have to "match" to check for missing values; it will automatically throw an exception in that case. Is there more to it than meets the eye?
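
Assuming getParam returns an Option[String], a minimal sketch of what .get does next to the safer alternatives:

val present: Option[String] = Some("<p>hello</p>")
val missing: Option[String] = None

present.get                     // "<p>hello</p>"
// missing.get                  // throws java.util.NoSuchElementException: None.get
missing.getOrElse("default")    // "default", no exception
missing match {                 // explicit handling via a match
  case Some(v) => v
  case None    => "default"
}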

by user3851565 at July 29, 2014 01:22 PM

Idiomatic no-op/"pass"

What's the (most) idiomatic Clojure representation of no-op? I.e.,

(def r (ref {}))
...
(let [der @r]
    (match [(:a der) (:b der)]
        [nil nil] (do (fill-in-a) (fill-in-b))
        [_ nil] (fill-in-b)
        [nil _] (fill-in-a)
        [_ _] ????))

Python has pass. What should I be using in Clojure?

ETA: I ask mostly because I've run into places (cond, e.g.) where not supplying anything causes an error. I realize that "most" of the time, an equivalent of pass isn't needed, but when it is, I'd like to know what's the most Clojuric.

by swizzard at July 29, 2014 01:18 PM

TheoryOverflow

Visible point in simple polygon

A simple polygon $P$ is given.
Prove that the point $q$ is an internal point of the simple polygon $P$ if and only if each vertex $v$ of $P$ is visible from the point $q$.

The first step is pretty much trivial and can be shown from the definition. The other step, though, is a bit more difficult (to me).
Would it be a good way to prove it by contradiction? I am thinking of assuming that there exists a vertex $v'$ that isn't visible from the point $q$... and then I can only guess that it will lead to the fact that $v'$ is external to the polygon $P$, or maybe that it is internal.
Am I on the right track? If so, can someone please explain to me how to show such a thing?

Thanks!

by Buzi at July 29, 2014 01:15 PM

/r/netsec

StackOverflow

In Haskell, are guards or matchers preferable?

I'm learning Haskell, and it's not always clear to me when to use a matcher and when to use a guard. For certain scenarios it seems that matchers and guards can be used to achieve essentially the same ends. Are there some rules or heuristics for when it's better to use matches over guards or vice versa? Is one more performant than the other?

To illustrate what I'm getting at, here are a couple of silly examples I cooked up that seem to be equivalent, but one version uses matchers and the other uses guards:

listcheck :: [a] -> String
listcheck [] = "List is null :-("
listcheck a = "List is NOT null!!"

listcheck' a
    | null a = "List is null :-("
    | otherwise = "List is NOT null!!"

and

luckyseven :: Int -> String
luckyseven 7 = "SO LUCKY!"
luckyseven b = "Not so lucky :-/"

luckyseven' c
    | c == 7 = "SO LUCKY!"
luckyseven' c = "Not so lucky :-/"

Thanks!

by Kurtis at July 29, 2014 01:13 PM

Debugging expression evaluation

I'm using IntelliJ IDEA Community Edition (with Scala) and I'm trying to evaluate an expression. I hit Alt-F8 to open it in debug mode and then switch to 'Code Fragment Mode'. However, I'm allowed to only evaluate variables that already exist in memory, and am not allowed to declare new ones. When I do so, I get: 'Evaluation of variables is not supported'. Is there a plugin that I can use in debug mode to evaluate arbitrary code?

by user247077 at July 29, 2014 01:12 PM

/r/netsec

UnixOverflow

How to install PostgreSQL 9.3 in FreeBSD jail?

I configured virtual NICs using pf, and a jail for FreeBSD using qjail create pgsql-jail 192.168.0.3.

When I tried to install PostgreSQL 9.3 using the ports collection, it showed a strange message at first.

pgsql-jail /usr/ports/databases/postgresql93-server >make install
===> Building/installing dialog4ports as it is required for the config dialog
===>  Cleaning for dialog4ports-0.1.5_1
===> Skipping 'config' as NO_DIALOG is defined
====> You must select one and only one option from the KRB5 single
*** [check-config] Error code 1

Stop in /basejail/usr/ports/ports-mgmt/dialog4ports.
*** [install] Error code 1

Stop in /basejail/usr/ports/ports-mgmt/dialog4ports.
===> Options unchanged
=> postgresql-9.3.0.tar.bz2 doesn't seem to exist in /var/ports/distfiles/postgresql.
=> Attempting to fetch ftp://ftp.se.postgresql.org/pub/databases/relational/postgresql/source/v9.3.0/postgresql-9.3.0.tar.bz2
postgresql-9.3.0.tar.bz2                        1% of   16 MB   71 kBps

Anyway, the installation continued, so I waited. I chose the default options in all the option dialogs. At the end of the process, I saw it finally fail with this message.

====> Compressing man pages
===>  Building package for pkgconf-0.9.3
Creating package /basejail/usr/ports/devel/pkgconf/pkgconf-0.9.3.tbz
Registering depends:.
Registering conflicts: pkg-config-*.
Creating bzip'd tar ball in '/basejail/usr/ports/devel/pkgconf/pkgconf-0.9.3.tbz'
tar: Failed to open '/basejail/usr/ports/devel/pkgconf/pkgconf-0.9.3.tbz'
pkg_create: make_dist: tar command failed with code 256
*** [do-package] Error code 1

Stop in /basejail/usr/ports/devel/pkgconf.
*** [build-depends] Error code 1

Stop in /basejail/usr/ports/textproc/libxml2.
*** [install] Error code 1

Stop in /basejail/usr/ports/textproc/libxml2.
*** [lib-depends] Error code 1

Stop in /basejail/usr/ports/databases/postgresql93-server.
*** [install] Error code 1

Stop in /basejail/usr/ports/databases/postgresql93-server.

I have no idea why this fails. The errors at the beginning suggest I have something wrong with dialog4ports, and the errors at the end suggest the installer cannot write to the ports file tree. AFAIK, the ports files are shared read-only from the host system.

What's wrong with my jail? How can I install PostgreSQL 9.3 in my jail?

by Eonil at July 29, 2014 12:55 PM

StackOverflow

How can I get a byte that represents an unsigned int in Java?

I have integers from 0 to 255, and I need to pass them along to an OutputStream encoded as unsigned bytes. I've tried to convert using a mask like so, but if i=1, the other end of my stream (a serial device expecting uint8_t) thinks I've sent an unsigned integer = 6.

OutputStream out;
public void writeToStream(int i) throws Exception {
    out.write(((byte)(i & 0xff)));
}

I'm talking to an Arduino at /dev/ttyUSB0 using Ubuntu if this makes things any more or less interesting.

Here's the Arduino code:

uint8_t nextByte() {
    while(1) {
    if(Serial.available() > 0) {
        uint8_t b =  Serial.read();
      return b;
     }
    }
}

I also have some Python code that works great with the Arduino code, and the Arduino happily receives the correct integer if I use this code in Python:

class writerThread(threading.Thread): 
    def __init__(self, threadID, name):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
    def run(self):
        while True:
            input = raw_input("[W}Give Me Input!")
            if (input == "exit"):
               exit("Goodbye");
            print ("[W]You input %s\n" % input.strip())
            fval = [ int(input.strip()) ]
            ser.write("".join([chr(x) for x in fval]))

I'd also eventually like to do this in Scala, but I'm falling back to Java to avoid the complexity while I solve this issue.
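
For the Scala side, a minimal sketch (writeUnsignedByte is a hypothetical helper); note that OutputStream.write(int) already keeps only the low-order 8 bits, so the mask mirrors the Java version rather than changing its behaviour:

import java.io.OutputStream

def writeUnsignedByte(out: OutputStream, i: Int): Unit = {
  require(i >= 0 && i <= 255, s"value $i is outside the unsigned byte range")
  out.write(i & 0xff)   // write(int) emits the low-order 8 bits as a single byte
}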

by Martin at July 29, 2014 12:54 PM

/r/netsec

StackOverflow

Why is this an invalid use of Scala's abstract types?

I have this code:

class A extends Testable { type Self <: A }

class B extends A { type Self <: B }

trait Testable {
    type Self
    def test[T <: Self] = {}
}

object Main {
    val h = new A
    // this throws an error
    h.test[B]
}

And my error is:

error: type arguments [B] do not conform to method test's type parameter bounds [T <: Main.h.Self]
    h.test[B]

On this question, it was said that this was due to path dependent types. Can anyone figure out how to have T <: Self, without having the path-dependent types problem?

Any help would be appreciated.
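
One possible workaround, sketched here with an F-bounded type parameter instead of the abstract type member (this changes the shape of Testable, so it may or may not fit the real code):

trait Testable[Self <: Testable[Self]] {
  def test[T <: Self]: Unit = ()
}

class A extends Testable[A]
class B extends A

object Main {
  val h = new A
  h.test[B]   // compiles: the bound T <: A no longer depends on the path h.Self
}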

by mdenton8 at July 29, 2014 12:51 PM

TheoryOverflow

Solving Recurrences

This is the recursion:
$$R_{n}=\frac{1}{n}\left(C_1 R_{n-1} + C_2 R_{n-2}\right), \qquad R_0 = A_0,\; R_1 = A_1$$

Given conditions

  1. $A_0,A_1,C_1,C_2 $ are constant matrices. $A_1$ and $A_0$ are initial values

  2. $A_0,A_1,C_1,C_2,R_n $ have dimension $3\times 3$

  3. $C_1,C_2,C_1+C_2$ etc. are skew-symmetric matrices and do not commute. $C_1$ and $C_2$ are skew-symmetric matrices with zero diagonals

  4. The $R_n$ form a convergent series, meaning the later terms approach zero or very small values

  5. The determinants of $C_1 R_{n-1}$ and $C_2 R_{n-2}$ are both zero (logic: $\det(C_1R_{n-1})=\det(C_1)\det(R_{n-1})=0\cdot\det(R_{n-1})=0$).

  6. SUM= $ \sum_{n=0}^{n= \infty} R_n \ne 0 $

    Let $R(x) = \sum_{n=0}^\infty R_n x^n$; then $SUM = R(1)$, and $R(1)$ is invertible because it is a rotation matrix with determinant 1. Remember we still have not proved that $R(x)$ is invertible; all we know from the given conditions is that $R(1)$ is invertible.

    Question

  7. What is the sum of the terms $R_n$? That is, I need to find $SUM = \sum_{n=0}^{\infty} R_n$.

  8. What is the expression for $R_n$ after solving the recursion?

by Rejo_Slash at July 29, 2014 12:50 PM

StackOverflow

Port scanning in Scala: Application hangs for a closed port on the remote host

I've been trying to create an application which needs to scan open ports on a network (mostly LAN) as fast as possible.

I searched around and one great method that I found uses the following code:

(1 to 65536).par.map { case port ⇒
  try {
    val socket = new java.net.Socket("127.0.0.1", port)
    socket.close()
    println(port)
    port
  } catch {
    case _: Throwable ⇒ -1
  }
}.toSet

However, the problem with the code is that if I enter anything other than 127.0.0.1 or localhost as location (say 192.168.1.2), the application freezes.

Any idea why this happens and how I can fix it?

P.S. I also tried setting socket timeout with socket.setSoTimeout(1500), but no change.
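
A minimal sketch of one likely fix, assuming the hang comes from the connect phase: setSoTimeout only bounds reads on an established socket, whereas Socket.connect takes its own timeout (isOpen is a hypothetical helper):

import java.net.{InetSocketAddress, Socket}

def isOpen(host: String, port: Int, timeoutMs: Int = 200): Boolean = {
  val socket = new Socket()
  try {
    socket.connect(new InetSocketAddress(host, port), timeoutMs)   // bounds the connect itself
    true
  } catch {
    case _: Throwable => false
  } finally socket.close()
}

// (1 to 65535).par.filter(isOpen("192.168.1.2", _)).toSet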

by Chetan Bhasin at July 29, 2014 12:45 PM

Lobsters

StackOverflow

Clojure: function composition at runtime

Problem:

Suppose I have a set of functions f_1 ... f_n that I want to compose at runtime, such that I get for example:

(f_a (f_b (f_c) (f_d)) (f_e))

Therefore I need the types of the parameters and the return value of each function in order to know which functions I can plug into each other.

First Attempt: Annotate each function

(defn foo [n f]
  ^{:params [Number clojure.lang.Fn]
    :return String}
  (do stuff with f and n, return a string))

I don't like this approach for obvious reasons: for example, if I wanted to use clojure.core as the set of functions, I would have to annotate every function, which wouldn't be very desirable.

Questions

  1. How would you attempt to solve this problem?

  2. Could core.typed help me with that?

by MLmuchAmaze at July 29, 2014 12:31 PM

Planet Theory

TR14-096 | Solving Linear Equations Parameterized by Hamming Weight | Vikraman Arvind, Johannes Köbler, Sebastian Kuhnert, Jacobo Toran

Given a system of linear equations $Ax=b$ over the binary field $\mathbb{F}_2$ and an integer $t\ge 1$, we study the following three algorithmic problems: 1. Does $Ax=b$ have a solution of weight at most $t$? 2. Does $Ax=b$ have a solution of weight exactly $t$? 3. Does $Ax=b$ have a solution of weight at least $t$? We investigate the parameterized complexity of these problems with $t$ as parameter. A special aspect of our study is to show how the maximum multiplicity $k$ of variable occurrences in $Ax=b$ influences the complexity of the problem. We show a sharp dichotomy: for each $k\ge 3$ the first two problems are W[1]-hard (which strengthens and simplifies a result of Downey et al. [SIAM J. Comput. 29, 1999]). For $k=2$, the problems turn out to be intimately connected to well-studied matching problems and can be efficiently solved using matching algorithms.

July 29, 2014 12:30 PM

TheoryOverflow

Tree Traversal - Simple Puzzle type Issue

This is a puzzle-like question, based on the Fibonacci-like structure of the tree. Actually it is a short question without any complex concepts. It appears a bit big, since I have added explanations with an example to avoid confusion. Compared to the Fibonacci tree structure, our tree also has node zero added; that is the only minor difference. Basically the aim is to utilize the Fibonacci tree properties. I have been trying this for more than one week. For understanding, I have given the example of a Fibonacci-like tree picture with $n=5$ for our case. If you check the node index, you can identify the Fibonacci indexing. (figure: Fibonacci-like tree for $n=5$)

Given data about the Tree for the Puzzle Question

  1. Each node has an Index as indicated in the figure. Leaf Index is $0$
  2. Each node is colored such that the left child of each node is colored Red and the right child is colored Green
  3. Each node, except the tree root, has a Scalar Weight Value associated with it; let us call it SWV(n). It is calculated as $\frac{1}{P(n)}$, where $P(n)$ gives the index number of the parent node of the current node $n$. For example $SWV(4)=\frac{1}{5}$
  4. Each node, except the tree root, has a Matrix Weight Value attached to it. Let us call it $MWV_1,MWV_2$, where $MWV_1(n)=SWV(n)*S_1$ and $MWV_2(n)=SWV(n)*S_2$. The values of $S_1$ and $S_2$ are simple and are given below. If the node is Red, it is given by $MWV_1(n)$, and if it is Green, it is given by $MWV_2(n)$. For example, node 4 in the picture has $MWV_1(4)=\frac{1}{5}S_1$ and node 2 has $MWV_2(2)=\frac{1}{4}S_2$ (child of node 4)

    $$S_1=\left( \begin{array}{ccc} 0 & -c_0 & b_0 \\ c_0 & 0 & -a_0 \\ -b_0 & a_0 & 0 \\ \end{array} \right).$$ $$S_2=\left( \begin{array}{ccc} 0 & -(c_1-c_0) & (b_1-b_0) \\ (c_1-c_0) & 0 & -(a_1-a_0) \\ -(b_1-b_0) & (a_1-a_0) & 0 \\ \end{array} \right).$$

    NB: All entries of the matrices $S_1$, $S_2$ are constants,can't be altered

    Some properties of $S_1$ and $S_2$ extracted from Given conditions as hints

    a) $S_2$ can be written in terms of $S_1$ and a constant matrix made of $a_1,b_1,c_1$

    b) Both are skew symmetric matrices with diagonals $0$

    c) $p = \sqrt{a_0^2+b_0^2+c_0^2}$, $S_1 ^3 = -(a_0^2+b_0^2+c_0^2)S_1 = -p^2 S_1$. Hence, $S_1^{2m+1} = (-1)^mp^{2m}S_1$ and $S_1^{2m} = (-1)^{m-1}p^{2m-2}S_1^2$ (courtesy @JimmyK4542 Link!). The same holds for the elements of $S_2$, as both have the same structure. This self-reducing property of the powers is actually a good hint; it can shorten our methods

  5. Path score of a leaf node = the product of the Matrix Weight Values from the leaf up to the immediate child of the root of the tree (as the root has no such value attached). For example, the path score of the leaf node shown near the arrow in the picture is given as

    Path score of that leaf node $=MWV_1(0)*MWV_1(1)*MWV_2(2)*MWV_1(4)$. Remember the multiplication order is very important, as these are matrices

Question

  1. What is the finite expression for the sum of all path scores from all leaves (only), in terms of $n$, for a given node index $n$?

    2. NB: You can even suggest separate expressions for the even and odd cases of $n$

    3. NB: Even a partial solution is welcome; I can join in the discussion

    4. NB: We can find some similarity between the tree structure, the Huffman coding tree (in the coloring) and the Fibonacci tree. It may give more hints.

Thank you for taking the time to read it.

by Rejo_Slash at July 29, 2014 12:10 PM

StackOverflow

Maintaining Elasticsearch and its settings and index mapping in a production

I am building a site and most of its functionality is backed by Elasticsearch. At the moment I am looking for a solution for how to update Elasticsearch mappings.

http://www.elasticsearch.org/blog/changing-mapping-with-zero-downtime/

One way could be to use set of requests in form of CURL commands and store them in files like SQL schema updates.

On the other hand - as the app is Scala/Java based, I could also do a mapping check at startup and update the index mappings there via the Java API.

What is the best practice here?

by Petteri Hietavirta at July 29, 2014 11:57 AM

error: not found: type SparkConf

I installed Spark, both the pre-compiled and the standalone builds. But both are unable to run val conf = new SparkConf(). The error is error: not found: type SparkConf:

scala> val conf = new SparkConf()
<console>:10: error: not found: type SparkConf

The pre-compiled build is Spark 0.9.1 with Scala 2.10.3; the standalone build is Spark 1.0.1 with Scala 2.10.4. For the standalone build, I compiled it with Scala 2.10.4. Your help will be much appreciated.
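
For what it's worth, SparkConf lives in the org.apache.spark package, so the shell needs the import (and the Spark jars on its classpath); a minimal sketch, with the application name and master chosen only for illustration:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("example").setMaster("local[2]")
val sc   = new SparkContext(conf)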

by del at July 29, 2014 11:45 AM

Lobsters

/r/clojure

QuantOverflow

What is the best solution to use QuantLib within Excel?

Excel is likely the most widespread instrument across all not-only-quants desks; in addition, we have to keep in mind that Bloomberg and Reuters allow you to easily import real-time data into Excel, and this is very handy.

Due to these features, I'm wondering what's the easiest and most reliable solution to use QuantLib in Excel.

So far, these are the ways I know:

Each one has its pros and cons; I must admit QuantLibXL is harder than I thought. Being quite able to code in R, so far my favorite solution is the second one.

If anyone knows any better solution and/or a good step-by-step tutorial for QuantLibXL (something which explains how to deal easily with "classes" in a spreadsheet), it would be really appreciated if he could write it here.

by Lisa Ann at July 29, 2014 11:37 AM

TheoryOverflow

Is there a simple reduction from Dominating Set to Vertex Cover?

Is there a simple reduction from Dominating Set to Vertex Cover?

In the other direction the reduction is simple.

Searching the web returned a blog post.

It warns "This is not finished yet", and experiments suggest the reduction doesn't work.


Added by "simple" I mean graph transformation $G \to G'$, s.t. $G$ has DS of size $k$ iff $G'$ has VC of size $f(G,k)$ and the transformation does not depend on $k$ (to avoid reduction to SAT).

by joro at July 29, 2014 11:36 AM

StackOverflow

possibility of scala-virtualized in Eclipse with Scala >=2.11

I followed this guide: http://lamplmscore.epfl.ch/mediawiki/index.php/Eclipse_IDE_with_Scala-virtualized

in order to use the virtualized (-Yvirtualize) plugin inside the Eclipse compiler. This works for a nightly build of Scala 2.10, which is old and lacks features like implicit classes. Does anyone know of a way to work with newer versions of Scala with the virtualize plugin AND Eclipse?

by Felix at July 29, 2014 11:34 AM

Java Play: bindFromRequest() not working

I'm experimenting with Java Play and I've hit an immediate roadblock. The situation is quite straightforward and the setup, simple.

I have a model class called Person that is very simple and looks like this;

package models.models;

import play.db.ebean.Model;

import javax.persistence.Entity;
import javax.persistence.Id;

/**
 * Created by asheshambasta on 25/07/14.
 */
@Entity
public class Person extends Model {

    @Id
    public Integer id;

    public String name;
}

And I have a route defined as;

POST    /person                     controllers.Application.addPerson()

Next, I have an action addPerson inside controllers.Application, which is

public static Result addPerson() {
    Person person = form(Person.class).bindFromRequest().get();
    person.save();
    Logger.debug(person.toString());
    Logger.debug(form().get("name"));
    return redirect(controllers.routes.Application.index());
}

And the index.scala.html looks like:

@(message: String)

@main("Welcome to Play") {
        <form action="@routes.Application.addPerson()" method="post">
            <input type="text" name="name" />
            <button>Add person</button>
        </form>
}

I've also checked my browser debug tools and I see the name form element correctly being posted to the server.

What happens is weird: none of the form parameters seem to be visible in the action. As you can see, I have two Logger.debug calls, each of which shows me null, both for the name property in the person object and when retrieved using form().get("name").

I've tried what I could see as the best way to go about this problem, but I've not really seen much about this issue online. And this seems too basic to be an issue with the Play framework.

What am I doing wrong here?

As a note, I'm on a Mac, and I'm using mySQL to store data.

by Ashesh at July 29, 2014 11:31 AM

scala: how can I chain partial functions of different types

I have a pattern to process web service requests using chained partial functions (this is a chain of responsibility pattern, I think?). In my example, let's say there are two parameters for the request, a string Id and a date. There's a verification step involving the id, a verification step checking the date, and finally some business logic that use both. So I have them implemented like so:

object Controller {
  val OK = 200
  val BAD_REQUEST = 400

  type ResponseGenerator = PartialFunction[(String, DateTime), (String, Int)]

  val errorIfInvalidId:ResponseGenerator = {
    case (id, _) if (id == "invalid") => ("Error, Invalid ID!", BAD_REQUEST)
  }

  val errorIfFutureDate:ResponseGenerator = {
    case (_, date) if (date.isAfter(DateTime.now)) => ("Error, date in future!", BAD_REQUEST)
  }

  val businessLogic:ResponseGenerator = {
    case (id, date) => {
      // ... do stuff
      ("Success!", OK)
    }
  }

  def handleRequest(id:String, date:DateTime) = {
    val chained = errorIfInvalidId orElse errorIfFutureDate orElse businessLogic
    val result: (String, Int) = chained(id, date)

    // make some sort of a response out of the message and status code
    // e.g. in the Play framework...
    Status(result._2)(result._1)
  }
}

I like this pattern because it's very expressive - you can easily grasp what the controller method logic is just by looking at the chained functions. And, I can easily mix and match different verification steps for different requests.

The problem is that as I try to expand this pattern it starts to break down. Suppose my next controller takes an id I want to validate, but does not have the date parameter, and maybe it has some new parameter of a third type that does need validation. I don't want to keep expanding that tuple to (String, DateTime, Other) and have to pass in a dummy DateTime or Other. I want to have partial functions that accept different types of arguments (they can still return the same type). But I can't figure out how to compose them.

For a concrete question - suppose the example validator methods are changed to look like this:

val errorIfInvalidId:PartialFunction[String, (String, Int)] = {
  case id if (id == "invalid") => ("Error, Invalid ID!", BAD_REQUEST)
}

val errorIfInvalidDate:PartialFunction[DateTime, (String, Int)] = {
  case date if (date.isAfter(DateTime.now)) => ("Error, date in future!", BAD_REQUEST)
}

Can I still chain them together? It seems like I should be able to map the tuples to them, but I can't figure out how.
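
One possible direction, sketched below: keep the single-argument validators and lift each one into the (id, date) domain with a small adapter before chaining (onId and onDate are hypothetical helper names):

import org.joda.time.DateTime

object Adapters {
  def onId[R](pf: PartialFunction[String, R]): PartialFunction[(String, DateTime), R] = {
    case (id, _) if pf.isDefinedAt(id) => pf(id)
  }

  def onDate[R](pf: PartialFunction[DateTime, R]): PartialFunction[(String, DateTime), R] = {
    case (_, date) if pf.isDefinedAt(date) => pf(date)
  }
}

// import Adapters._
// val chained = onId(errorIfInvalidId) orElse onDate(errorIfInvalidDate) orElse businessLogic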

by ryryguy at July 29, 2014 11:14 AM

filter collection based on value of field

I have this collection

class ConvEntry(designation: String, saeThick: Double, common: Boolean)
val convList = immutable.List(
      new ConvEntry("16 Gauge", 0.0598, true),
      new ConvEntry("1/16th Inch", 0.0625, true),
      new ConvEntry("15 Gauge", 0.0673, false),
      new ConvEntry("14 Gauge", 0.0747, false),
      new ConvEntry("13 Gauge", 0.0897, false),
      new ConvEntry("12 Gauge", 0.1046, true),
      new ConvEntry("11 Gauge", 0.1196, false),
      new ConvEntry("1/8th Inch", 0.1250, true),
      new ConvEntry("10 Gauge", 0.1345, false),
      new ConvEntry("0.160 Inch", 0.1600, false),
      new ConvEntry("8 Gauge", 0.1644, false),
      new ConvEntry("3/16th Inch", 0.1875, true),
      new ConvEntry("0.190 Inch", 0.1900, false),
      new ConvEntry("0.204 Inch", 0.2040, false),
      new ConvEntry("1/4 Inch", 0.2500, true),
      new ConvEntry("5/16th Inch", 0.3125, true),
      new ConvEntry("3/8th Inch", 0.3750, true),
      new ConvEntry("7/16th Inch", 0.4375, true),
      new ConvEntry("1/2 Inch", 0.5000, true),
      new ConvEntry("9/16th Inch", 0.5625, true),
      new ConvEntry("5/8th Inch", 0.6250, true),
      new ConvEntry("11/16th Inch", 0.6875, true),
      new ConvEntry("3/4th Inch", 0.7500, true),
      new ConvEntry("13/16th Inch", 0.8125, true),
      new ConvEntry("7/8 Inch", 0.8750, true),
      new ConvEntry("1 Inch", 1.0000, true),
      new ConvEntry("1 1/4 Inch", 1.2500, true),
      new ConvEntry("1 1/2 Inch", 1.5000, true),
      new ConvEntry("1 3/4 Inch", 1.7500, true),
      new ConvEntry("2 Inch", 2.0000, true),
      new ConvEntry("2 1/2 Inch", 2.5000, true)
)

What I'm trying to figure out is how to filter the collection based on various values in the fields. I need the list of entries whose common value is true, the list of those whose value is false, and I need to find the entries directly above and below a given number.

Is this possible with the collections API, or do I need to do the old-fashioned brute-force loop method?
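
A minimal sketch of the three operations, assuming ConvEntry exposes its fields (for example by being declared as a case class, since plain constructor parameters are not visible from outside the class):

case class ConvEntry(designation: String, saeThick: Double, common: Boolean)

val commonEntries   = convList.filter(_.common)      // entries with common == true
val uncommonEntries = convList.filterNot(_.common)   // entries with common == false

// entries directly below and above a given thickness
def bracket(target: Double): (Option[ConvEntry], Option[ConvEntry]) = {
  val sorted = convList.sortBy(_.saeThick)
  (sorted.takeWhile(_.saeThick <= target).lastOption,
   sorted.dropWhile(_.saeThick <= target).headOption)
}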

by scphantm at July 29, 2014 11:11 AM

Portland Pattern Repository

StackOverflow

Why the RDD is not persisted in memory for every iteration in spark?

I use Spark for a machine learning application. Spark and Hadoop share the same computer clusters without any resource manager such as YARN. We can run Hadoop jobs while running Spark tasks.

But the machine learning application runs very slowly. I found that for every iteration, some workers need to add some RDD blocks into memory, like this:

243413 14/07/23 13:30:07 INFO BlockManagerMasterActor$BlockManagerInfo: Added rdd_2_17 in memory on XXX:48238 (size: 118.3 MB, free: 16.2 GB)
243414 14/07/23 13:30:07 INFO BlockManagerMasterActor$BlockManagerInfo: Added rdd_2_17 in memory on XXX:48238 (size: 118.3 MB, free: 16.2 GB)
243415 14/07/23 13:30:08 INFO BlockManagerMasterActor$BlockManagerInfo: Added rdd_2_19 in memory on TS-XXX:48238 (size: 119.0 MB, free: 16.1 GB)

So, I think the recomputation needed to reload the RDD makes the application slow.

Then, my question is: why was the RDD not persisted in memory when there was enough free memory? Is it because of the Hadoop jobs?


I added the following JVM parameters: -Xmx10g -Xms10g

I found there were fewer RDD add actions than before, and the task run time was shorter than before. But the total time for one stage is still too large. From the web UI, I found that:

For every stage, not all the workers start at the same time. For example, when worker_1 has finished 10 tasks, worker_2 appears on the web UI and starts its tasks. And this leads to a long stage time.


Our Spark cluster works in standalone mode.
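
For what it's worth, an RDD is only kept in memory across actions if it is explicitly marked for caching; a minimal sketch, where the input path and parser are hypothetical:

import org.apache.spark.storage.StorageLevel

val trainingData = sc.textFile("hdfs://...").map(parseExample)   // hypothetical input and parser
trainingData.persist(StorageLevel.MEMORY_ONLY)                   // equivalent to .cache()
// subsequent iterations reuse the cached blocks instead of recomputing them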

by Tim at July 29, 2014 10:44 AM

QuantOverflow

Probability of Hyperinflation as a function of Probability of Soverign Default

I'm looking for some academic research on modeling risk of hyperinflation. Specifically, I'm interested in modeling the probability of hyperinflation over some time interval (e.g., probability of hyperinflation in Argentina within the next year).

I'm familiar with numerous macroeconomic models which are related to inflation, but I'm looking for something a bit different. Clearly sovereign default and hyperinflation are related, but I'd like to estimate some function to convert between the two. For example, we can infer the probability of default from CDS rates. Then, I'd like to use that to determine the probability of hyperinflation. Are there any problems with attempting to approach this problem using this method?

Any references or guidance would be appreciated. FWIW, my technical background in mathematics, stats, finance is relatively advanced. i.e., don't be discouraged from sharing any references using stochastic calculus, vector autoregression, etc.

Thanks!

by nsw at July 29, 2014 10:43 AM

Planet Emacsen

Irreal: Emacs Autoloads

Over at lunarsite, Sebastian Wiesner has a very nice post that explains the ins and outs of autoloads in Emacs lisp. Mostly, autoloads take care of themselves but sometimes users do need to interact with them. Here’s an example that I learned from Steve Purcell.

Of course, if you’re writing packages you will need to understand how autoloads work and when to use them. Wiesner’s post gives you all the information you need. Definitely worth a read even if you’re not writing your own packages.

by jcs at July 29, 2014 10:40 AM

Fred Wilson

The Micro And Macro Of Mobile

Here’s a great podcast featuring my favorite analyst Benedict Evans, talking about macro and micro stuff in mobile.

by Fred Wilson at July 29, 2014 10:10 AM

/r/freebsd

md5 check on files FreeBSD?

I created a new NAS system with NAS4FREE. Before I transferred all the files from an old NAS, I created an md5 checksum of all the files. I now have the files on the new NAS4FREE box but am uncertain how to run the md5 checksum to make sure the files made the transfer fine. I have a disk.md5 file on the NAS4FREE box and just want to run it now to verify file integrity. I am having trouble figuring this out via the CLI :( ...please let me know. I would just like it to be verbose as it checks the files so I can see that it is running. Thanks!!

Added more description: Actually if I run 'md5 -c /pool0/data/disk1.md5' it ends instantly without error. The old system I had running had three disks: /mnt/disk1, /mnt/disk2, /mnt/disk3. I created a disk1.md5, disk2.md5 and disk3.md5; disk1.md5 points to /mnt/disk1/file1.mp3, disk2.md5 points to /mnt/disk2/file2.mp3, and disk3.md5 points to /mnt/disk3/file3.mp3. The new data structure has the combination of these three disks' files in my zpool. The folder structure and everything is exactly the same except that the mount point of the pool is at '/pool0/data'. So the file structure is like this: '/pool0/data/file1.mp3', '/pool0/data/file2.mp3', '/pool0/data/file3.mp3'. So since disk1.md5 points to /mnt/disk1/file1.mp3, I created a symlink on a folder /mnt/disk1 so it looks just like it did on the old file server. I should be able to run 'md5 -c /pool0/data/disk1.md5' and it should run through the list of files and check them, but it doesn't. Any ideas?

submitted by n0hc

July 29, 2014 10:08 AM

StackOverflow

Should I generate idea project with command line or should I import with the SBT plugin?

I have had problems with opening a project generated from a build.sbt file. I prefer the command line approach because it seems more standard. But I got an error when compiling the project in the IDE:

Error:scalac: Output path xxx is shared between: Module 'domainRegistrar-build' tests, Module 'domain_registrar-build' tests
......

by XiaoPeng at July 29, 2014 10:08 AM

Akka SLF4J logback configuration and usage

I have done the following steps to try and configure logging for my Akka application:

  • created an application.conf file and placed it in src/main/resources. It looks like:

    
        akka { 
          event-handlers = ["akka.event.slf4j.Slf4jEventHandler"] 
          loglevel = "INFO"
        }
    

  • created a logback.xml file and placed it in src/main/resources. It looks like:

    <configuration>
    
      <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <File>./logs/akka.log</File>
        <encoder>
          <pattern>%d{HH:mm:ss.SSS} [%-5level] %msg%n</pattern>
        </encoder>
      </appender>
    
      <root level="info">
        <appender-ref ref="FILE" />
      </root>
    
    </configuration>
    
  • added the following to my .scala sbt build file:


    libraryDependencies += "com.typesafe.akka" % "akka-slf4j" % "2.0.3"
    libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.0.9"
    lazy val logback = "ch.qos.logback" % "logback-classic" % "1.0.9"

  • attempted this code to log:

    
    import akka.event.Logging
    val log = Logging(context.system, this)
    log.info("...")

All I am getting is standard output logging, no log file creation with the logs.

Have I missed a step ? Or misconfigured something?

by Apple Pie at July 29, 2014 10:01 AM

Planet Clojure

StackOverflow

How to avoid losing type information

Suppose I have something like this:

trait Cursor {
}

trait Column[T] {
   def read(cusor: Cursor): T
}

trait ColumnReader {
   def readColumns(columns: Product[Column[_]], cursor: Cursor): Iterable[Any] = {
       for (column <- columns) yield column.read(cursor)
   }
}

The problem of the readColumns() API is that I lose the type information, i.e., if I have this:

object columnString extends Column[String] {
   def read(cursor: Cursor): String = ...
}

object columnInt extends Column[Int] {
   def read(cursor: Cursor): Int = ...
}

An expression like new ColumnReader().read((columnString, columnInt)) returns Iterable[Any]. I would like to return something typed like Tuple2[String, Int], but don't know how. I lose type information useful to the compiler.

Maybe a library like Shapeless could be useful.

I'm sure Scala has some tool for dealing with problems like this.

Any ideas?
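
One small illustration of keeping the types without extra libraries is to fix the arity, e.g. typed readers for two and three columns (TypedColumnReader and the method names are hypothetical); for arbitrary arity this is exactly where a Shapeless HList would come in:

trait TypedColumnReader {
  def readColumns2[A, B](cols: (Column[A], Column[B]), cursor: Cursor): (A, B) =
    (cols._1.read(cursor), cols._2.read(cursor))

  def readColumns3[A, B, C](cols: (Column[A], Column[B], Column[C]), cursor: Cursor): (A, B, C) =
    (cols._1.read(cursor), cols._2.read(cursor), cols._3.read(cursor))
}

// new TypedColumnReader {}.readColumns2((columnString, columnInt), cursor): (String, Int)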

by david.perez at July 29, 2014 09:40 AM

QuantOverflow

Why are we obsessed over normalizing financial data?

I have recently begun work on some high-frequency financial tick data. I have been told to 'normalize' the data as much as possible and run linear regressions through it. In fact, the data doesn't seem to be any more linear after I did transformations on it (Box-Cox/log etc.). I understand the linear regression bit, but most financial data is non-normal anyway, so why bother normalizing?

A puzzled junior

by user2158552 at July 29, 2014 09:10 AM

Inflation-Linked Bonds & Asset Swap Spreads

I am trying to plot the asset swap spreads of government inflation-linked bonds (ILBs) versus the asset swap spread of government nominal (plain-vanilla) reference bonds.

I used the article in the link below:

http://www.risk.net/risk-magazine/feature/1515067/how-read-asset-swap-prices-inflation-linked-bonds

My questions/concerns:

a.) I have conceptual concerns using the net-proceeds asset swap structure (let me qualify that that by saying, given my understandings). My understanding is that we are trying to solve for the asset swap spread (which is built into the floating leg of the asset swap) which sets:

PV(Fixed)-PV(Floating)=0

where Fixed denotes the fixed leg of the swap and Floating the floating leg of the swap. I felt more comfortable with the par asset swap structure - solving for the asset swap spread which sets:

AIP-PV(Fixed)-PV(Floating)=100

where AIP is the current bond all-in-price. Why I liked this was because if a bond was issued with a high coupon rate (relative to current interest rate environment) but had an all-in-price less than par (100), one would conclude the bond had poorer credit quality (relatively speaking - and just assume there is no liquidity premium, etc.) This was then matched by a larger asset swap spread - ie. as holder of the bond I am compensated more for its inferior credit quality.

But I don't see this mechanism in the net-proceeds asset swap because the all-in-price is not built into the structure (in the par asset swap structure, at initiation you pay par for a bond whose current value is the all in price, while under the net-proceeds structure, you pay the all-in-price (so that (AIP-100) term is not present in the net-proceeds structure as it is in the par-asset swap structure)

b.) Anyway having used the net-proceeds for the ILBs, the graph of the ILB asset swap spread is completely different to the Nominal asset swap spread - the ILB spreads are roughly around double the size of Nominal spreads and the shape of the graph (vs. maturity of the bonds) is erratic and wholly different to the shape of the Nominal curve

Now this may be due to the different credit risk profile of the ILB versus a plain vanilla nominal bond (explained in the article in the link above). But the article fails to cover how to account/compensate for this differing credit structure (so that we could compare the ILB spreads to its reference nominal bond's spread). How would one account for this?

Does anyone have an idea how one should go about this, or more generally how to model the asset swap spread for ILBs?

Any help is greatly appreciated

by Nick at July 29, 2014 09:08 AM

StackOverflow

how to get value from counter Column in cassandra with multiple row keys?

I have one column family that has multiple counter columns. Now I want to get their values for different row keys. Can I apply something like RangeSlicesQuery or MultigetSliceQuery to counter columns? Please show me a way to do this with counter columns.

by Rohit Sharma at July 29, 2014 09:05 AM

Gatling: Get REST resource, change JSON leaf, Post back

I am working on a test script, testing a REST interface in Gatling using Scala.

For a specific REST resource, this is what I would like to achieve:

  1. Get the resource (which will give me JSON data in the body).
  2. Use jsonpath to change a value in the body.
  3. Post the modified body back to the same url

I have managed to succeed with 1 and 3. The only problem left is to change the JSON data, which seems to be in string format.

Test steps

object WebTestCollection {


    def getAccountDetails() = http("Get account")
      .get("/account")
      .check(jsonPath("$.billingAccount").saveAs("accountjson"))


    def postNewAccountDetails() = http("Post modified account")
      .post("/account").asJSON
      .body("${accountjson}")

}

Part of the scenario

val scn = scenario("Web Usage")
    .feed(testRuns)


        .exec(WebTestCollection.getAccountDetails())
        .exitHereIfFailed

        .exec(session => {
              var account = session.getAttribute("accountjson")
              account.notes = "Performance Test Comment"
              println(account)
              session.setAttribute("accountjson", account)
          }
        )       

        .exec(WebTestCollection.postNewAccountDetails())
        .exitHereIfFailed

I get the following errors

09:59:30.267 [ERROR] c.e.e.g.a.ZincCompiler$ - <snip>/WebScenario.scala:172: value notes is not a member of Any
09:59:30.268 [ERROR] c.e.e.g.a.ZincCompiler$ -            account.notes = "Performance Test Comment"
09:59:30.269 [ERROR] c.e.e.g.a.ZincCompiler$ -                                   ^
09:59:30.664 [ERROR] c.e.e.g.a.ZincCompiler$ - one error found
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at scala_maven_executions.MainHelper.runMain(MainHelper.java:164)
    at scala_maven_executions.MainWithArgsInFile.main(MainWithArgsInFile.java:26)
Caused by: Compilation failed
    at sbt.compiler.AnalyzingCompiler.call(AnalyzingCompiler.scala:76)
    at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:35)
    at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:29)
    at sbt.compiler.AggressiveCompile$$anonfun$4$$anonfun$compileScala$1$1.apply$mcV$sp(AggressiveCompile.scala:71)
    at sbt.compiler.AggressiveCompile$$anonfun$4$$anonfun$compileScala$1$1.apply(AggressiveCompile.scala:71)
    at sbt.compiler.AggressiveCompile$$anonfun$4$$anonfun$compileScala$1$1.apply(AggressiveCompile.scala:71)
    at sbt.compiler.AggressiveCompile.sbt$compiler$AggressiveCompile$$timed(AggressiveCompile.scala:101)
    at sbt.compiler.AggressiveCompile$$anonfun$4.compileScala$1(AggressiveCompile.scala:70)
    at sbt.compiler.AggressiveCompile$$anonfun$4.apply(AggressiveCompile.scala:88)
    at sbt.compiler.AggressiveCompile$$anonfun$4.apply(AggressiveCompile.scala:60)
    at sbt.inc.IncrementalCompile$$anonfun$doCompile$1.apply(Compile.scala:24)
    at sbt.inc.IncrementalCompile$$anonfun$doCompile$1.apply(Compile.scala:22)
    at sbt.inc.Incremental$.cycle(Incremental.scala:52)
    at sbt.inc.Incremental$.compile(Incremental.scala:29)
    at sbt.inc.IncrementalCompile$.apply(Compile.scala:20)
    at sbt.compiler.AggressiveCompile.compile2(AggressiveCompile.scala:96)
    at sbt.compiler.AggressiveCompile.compile1(AggressiveCompile.scala:44)
    at com.typesafe.zinc.Compiler.compile(Compiler.scala:158)
    at com.typesafe.zinc.Compiler.compile(Compiler.scala:142)
    at com.excilys.ebi.gatling.app.ZincCompiler$.apply(ZincCompiler.scala:104)
    at com.excilys.ebi.gatling.app.SimulationClassLoader$.fromSourcesDirectory(SimulationClassLoader.scala:34)
    at com.excilys.ebi.gatling.app.Gatling$$anonfun$12.apply(Gatling.scala:89)
    at com.excilys.ebi.gatling.app.Gatling$$anonfun$12.apply(Gatling.scala:89)
    at scala.Option.getOrElse(Option.scala:108)
    at com.excilys.ebi.gatling.app.Gatling.start(Gatling.scala:89)
    at com.excilys.ebi.gatling.app.Gatling$.fromMap(Gatling.scala:54)
    at com.excilys.ebi.gatling.app.Gatling$.runGatling(Gatling.scala:74)
    at com.excilys.ebi.gatling.app.Gatling$.main(Gatling.scala:49)
    at com.excilys.ebi.gatling.app.Gatling.main(Gatling.scala)
    ... 6 more
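
The compile error is simply saying that the attribute saved by saveAs comes back typed as Any, so it has no notes member. One possible workaround sketch (an assumption on my part: the saved value is treated as a plain JSON string and an existing "notes" field is rewritten at the text level):

.exec(session => {
  // the saved check result is typed as Any, so turn it into a String first
  val account = session.getAttribute("accountjson").toString
  // hypothetical text-level edit; it only rewrites a "notes" field that is already present,
  // and a real JSON library could of course be used here instead
  val updated = account.replaceAll("\"notes\"\\s*:\\s*\"[^\"]*\"",
    "\"notes\":\"Performance Test Comment\"")
  session.setAttribute("accountjson", updated)
})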

by www.jensolsson.se at July 29, 2014 08:58 AM

Recaptcha with scala and play framework

Using this tutorial, I am trying to apply a captcha in my Play project.

HTML

<form action="/applyForWork" method="post" enctype="multipart/form-data">
        <input type="text" name="relevant" id="relevant" >
        <input type="file" name="file"/>
        <br/>
        @Html(views.ReCaptcha.render())
        <br/>
        <input type="submit" value="Upload"/>
</form>

Controller

def applyForWork = Action {
    implicit request =>
      println(request.body.asFormUrlEncoded) //None
      Ok("submitted")
  }

Q1. Why does this println(request.body.asFormUrlEncoded) give None?


Q2. The captcha box is shown in my HTML, but how do I validate whether the user's answer is correct?

I am using Scala 2.10 with Play Framework 2.2.
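
Regarding Q1, one likely explanation (an assumption on my part, since it depends on how the request is actually parsed): the form is submitted as multipart/form-data, and asFormUrlEncoded only yields a value for application/x-www-form-urlencoded bodies. A minimal sketch of reading the text field from the multipart data parts instead:

def applyForWork = Action {
  implicit request =>
    // multipart/form-data is not visible through asFormUrlEncoded;
    // the text fields live in the multipart dataParts instead
    val relevant = request.body.asMultipartFormData
      .flatMap(_.dataParts.get("relevant"))
      .flatMap(_.headOption)
    println(relevant)
    Ok("submitted")
}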

by Govind Singh Nagarkoti at July 29, 2014 08:57 AM

Scala framework for a Rest API Server?

We are thinking of moving our REST API server (it is inside the web service, on Symfony PHP) to Scala for several reasons: speed, no overhead, less CPU, less code, scalability, etc. I didn't know Scala until several days ago, but I've been enjoying what I've been learning these days with the Scala book and all the blog posts and questions (it's not so ugly!).

I have the following options:

  • build the Rest API Server from scratch
  • use a tiny Scala web framework like Scalatra
  • use Lift

Some things that I will have to use: HTTP requests, JSON output, MySQL (data), OAuth, Memcache (cache), Logs, File uploads, Stats (maybe Redis).
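
For a sense of scale, this is roughly what a route looks like in a tiny framework like Scalatra (a minimal sketch; the servlet name and path are purely illustrative, and JSON, OAuth, caching and the rest would still be separate concerns):

import org.scalatra.ScalatraServlet

class ApiServlet extends ScalatraServlet {
  get("/ping") {
    contentType = "application/json"
    """{"status":"ok"}"""
  }
}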

What would you recommend?

by fesja at July 29, 2014 08:54 AM

/r/netsec

StackOverflow

ansible playbooks get which variable file by default if not defined

I have a devops directory containing Ansible's variable directory, playbooks and inventory directory.

The directory looks like this

|groups_vars
      -all.yml
      -development.yml
      -staging.yml
|inventroy
      - staging
      - development

configure.yml
deploy.yml

configure.yml and deploy.yml contain tasks that are applied to either staging or development machines using variables in groups_vars.

Now, if I call the ansible-playbook command with the staging inventory, how will it know which variable file to use? The vars_files directive is not added to configure.yml or deploy.yml.

By the way, I am using an example from the company I work for, and the example is working. I just want to understand the magic that is happening: it uses the right variable file even though the var file is not included in configure.yml or deploy.yml.

by user2388404 at July 29, 2014 08:10 AM

CompsciOverflow

Stopping condition for goal-directed bidirectional search for shortest path

So I have a graph and need to find the shortest path between two points in it. I need1 to do it using bidirectional search. The bidirectional search should be goal-directed, i.e. A*.

So let $l(u,v)$ be length of the (oriented) edge $u,v$, $\pi_f(v)$ the potential of vertex $v$ in forward search and $\pi_r(v)$ potential of vertex $v$ in reverse search and $d(u,v)$ length of the shortest path from $u$ to $v$. Let $s$ be start vertex, $t$ goal vertex. The algorithm selects vertices by $d(s,v)+\pi_f(v)$ forward and $d(v,t)+\pi_r(v)$ reverse.

Let's call $\mu$ the length of the shortest path found so far, $n_f$ the vertex on top of the forward queue and $n_r$ the vertex on top of the reverse queue. I found two ways to define the stopping condition:

  1. The obvious option is to stop the forward search when $d(s,n_f)+\pi_f(n_f)\geq\mu$ and the reverse search when $d(n_r,t)+\pi_r(n_r)\geq\mu$. It is also not necessary to process edges that were already processed in the other direction. Here $\pi_f$ and $\pi_r$ are independent and can be very specific, but the algorithm may need to continue quite long after the shortest path was actually found if the potential function significantly underestimates.

  2. Create a pair of consistent potential functions as defined in this lecture. The requirement is given as $$\pi_f(u) + \pi_r(u) = \pi_f(v) + \pi_r(v)$$ for each edge $(u,v)$ (which really means the sum has to be constant over the whole graph). Without loss of generality we can make $\pi_r(v) = -\pi_f(v)$ and use the normal stopping condition from non-goal-directed search, expressed as $$d(s,n_f)+\pi_f(n_f) + d(n_r,t)+\pi_r(n_r) \geq \mu+\pi_r(t)$$ (assuming we shift $\pi$ so that $\pi_f(s) = 0$); see the note just below this list for why the consistency requirement makes this work.

    This allows an easier stop, but the potential function can then only indicate whether $v$ is closer to the start or to the goal, and its value for a vertex (equally) far from both will be the same as for a vertex in the middle of the shortest path. Therefore it will be less specific.
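
A quick note on why the consistency requirement in option 2 lines up with the plain stopping rule (a sketch of the reasoning, using the notation above): A* with potential $\pi_f$ behaves like Dijkstra run on the reduced edge lengths $$l_f(u,v) = l(u,v) - \pi_f(u) + \pi_f(v), \qquad l_r(v,u) = l(u,v) - \pi_r(v) + \pi_r(u),$$ and the requirement $\pi_f(u)+\pi_r(u)=\pi_f(v)+\pi_r(v)$ is exactly $l_f(u,v)=l_r(v,u)$, so both searches explore the same reduced graph and the usual bidirectional termination argument carries over, with all distances shifted by the potentials as in the displayed inequality.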

Now what I am looking for is:

  1. anything that would give me an idea of which option would be more efficient (without having to implement both and test them), and
  2. whether the second can even be used if the heuristic is not monotone, i.e. when $d(u,v) - \pi_f(u) + \pi_f(v) \ge 0$ does not hold (the linked lecture assumes it does, but not requiring it could save me a lot of data, and I/O is a bottleneck, so I would prefer not to require it even though it means occasionally having to reprocess a vertex).

1Some important optimization techniques can only be applied to bidirectional search.

by Jan Hudec at July 29, 2014 08:08 AM

Fefe

A good night's sleep helps against breast cancer. ...

A good night's sleep helps against breast cancer. If it is not properly dark, less melatonin is produced, and melatonin inhibits tumors. The effect is much stronger in combination with a particular breast cancer drug, but melatonin also helps against cancer without medication. Anyone who works night shifts for 30 years doubles their breast cancer risk.

July 29, 2014 08:02 AM

Portland Pattern Repository

QuantOverflow

Negative Risky vs Negative Butterfly

I understand that, in regard to FX options, a volatility smile with negative risk reversals effectively indicates that the spot market for a given currency pair is in decline (puts over calls).

In similar layman's terms, can anyone help me understand what a negative butterfly in the vol smile implies for the underlying ccy pair?

Thanks and Regards, John

by John Woods at July 29, 2014 07:34 AM

Undeadly

Call for Testers: radeondrm(4) updates

Jonathan Gray (jsg@) posted a call for testers for radeondrm(4) updates:

I'm looking for a few people to test some additional radeondrm fixes from the recently released Linux 3.8.13.27: https://lkml.org/lkml/2014/7/25/621

In particular on newer asics with displayport/eDP as I can only test on r100/lvds at the moment.

July 29, 2014 07:18 AM

/r/compsci

Resources on Evolutionary Algorithms

Can anyone recommend books or other resources for someone trying to get a running start applying evolutionary algorithms and the concepts of evolutionary computation?

submitted by tragoh
[link] [13 comments]

July 29, 2014 07:10 AM

StackOverflow

Are there standards for classification of Programming Languages [on hold]

Are there any existing standards for the classification/categorization of programming languages, including the classification/categorization criteria?

by Manuj at July 29, 2014 07:09 AM

Grails 1.3.9 and grails-melody: javamelody.jar not packaged when building war with OpenJDK 1.6

We are maintaining a webapp developed in Grails 1.3.9.

For monitoring performance, the app had the grails-melody 1.21 plugin installed.

It seems that the plugin is no longer available in the repositories for Grails 1.3.x, so I downloaded it from Google Code as suggested in the documentation.

Another post in stackoverflow suggests that zipped plugins can be put in lib and then referenced from BuildConfig.groovy.

plugins {
    runtime ":hibernate:1.3.9"
    build ":tomcat:1.3.9"
    compile ":dojo:1.6.1.17"

    // Downloaded from
    // https://code.google.com/p/javamelody/downloads/list?can=1&q=grails
    // Installed from lib
    // http://stackoverflow.com/questions/15751285/whats-the-correct-way-to-install-a-grails-plugin-from-a-zip-file

    compile ":grails-melody:1.21"
}

I did that and this procedure worked fine when building war file with Oracle JDK 7 (on Ubuntu 14.04). I had to rename grails-grails-melody-1.21.zip to grails-melody-1.21.zip so that it was found.

$ java -version
java version "1.7.0_65"
Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)

During the build, melody's dependencies were loaded at the beginning of the build process, when the Grails files were copied as well:

...
Downloading: /home/matejk/devel/grails/grails-1.3.9/lib/servlet-api-2.5.jar ...
Download complete.
Downloading: /home/matejk/devel/grails/grails-1.3.9/lib/jsp-api-2.1.jar ...
Download complete.
Downloading: http://repo1.maven.org/maven2/net/bull/javamelody/javamelody-core/1.44.0/javamelody-core-1.44.0.pom ...
Download complete.
Downloading: http://repo1.maven.org/maven2/net/bull/javamelody/javamelody-core/1.44.0/javamelody-core-1.44.0.pom.sha1 ...
Download complete.
Downloading: http://repo1.maven.org/maven2/org/jrobin/jrobin/1.5.9/jrobin-1.5.9.pom ...
Download complete.
Downloading: http://repo1.maven.org/maven2/org/jrobin/jrobin/1.5.9/jrobin-1.5.9.pom.sha1 ...
Download complete.
Downloading: http://repo1.maven.org/maven2/com/lowagie/itext/2.1.7/itext-2.1.7.pom ...
Download complete.
Downloading: http://repo1.maven.org/maven2/com/lowagie/itext/2.1.7/itext-2.1.7.pom.sha1 ...
Download complete.
Downloading: /home/matejk/devel/grails/grails-1.3.9/lib/groovy-all-1.7.8.jar ...
Download complete.
Downloading: /home/matejk/devel/grails/grails-1.3.9/lib/commons-beanutils-1.8.0.jar ...
Download complete.
...
Downloading: http://repo1.maven.org/maven2/net/bull/javamelody/javamelody-core/1.44.0/javamelody-core-1.44.0.jar ...
Download complete.
Downloading: http://repo1.maven.org/maven2/net/bull/javamelody/javamelody-core/1.44.0/javamelody-core-1.44.0.jar.sha1 ...
Download complete.
Downloading: http://repo1.maven.org/maven2/com/lowagie/itext/2.1.7/itext-2.1.7.jar ...
Download complete.
Downloading: http://repo1.maven.org/maven2/com/lowagie/itext/2.1.7/itext-2.1.7.jar.sha1 ...
Download complete.
Downloading: http://repo1.maven.org/maven2/org/jrobin/jrobin/1.5.9/jrobin-1.5.9.jar ...
Download complete.
Downloading: http://repo1.maven.org/maven2/org/jrobin/jrobin/1.5.9/jrobin-1.5.9.jar.sha1 ...
Download complete.
Downloading: /home/matejk/devel/grails/grails-1.3.9/lib/aspectjweaver-1.6.8.jar ...
...

Resulting war file had javamelody, jrobin and itext jars in WEB-INF/lib.

However, the requirement is to build the app with JDK 1.6 on another machine (Jenkins) where clean checkout of sources is done for every build.

java -version
java version "1.6.0_31"
OpenJDK Runtime Environment (IcedTea6 1.13.3) (6b31-1.13.3-1ubuntu1)
OpenJDK 64-Bit Server VM (build 23.25-b01, mixed mode)

Download of jars happened later in the build process:

Executing hibernate-1.3.9 plugin post-install script ...
Plugin hibernate-1.3.9 installed
Installing zip /var/lib/jenkins/.ivy2/cache/org.grails.plugins/grails-melody/zips/grails-melody-1.21.0.zip... ...
    [mkdir] Created dir: /var/lib/jenkins/workspace/etermin-2.4/target/projects/etermin-2.4/plugins/grails-melody-1.21
    [unzip] Expanding: /var/lib/jenkins/.ivy2/cache/org.grails.plugins/grails-melody/zips/grails-melody-1.21.0.zip into /var/lib/jenkins/workspace/etermin-2.4/target/projects/etermin-2.4/plugins/grails-melody-1.21
Installed plugin grails-melody-1.21 to location /var/lib/jenkins/workspace/etermin-2.4/target/projects/etermin-2.4/plugins/grails-melody-1.21. ...
Resolving plugin JAR dependencies ...
Downloading: http://repo1.maven.org/maven2/net/bull/javamelody/javamelody-core/1.44.0/javamelody-core-1.44.0.pom ...
Download complete.
Downloading: http://repo1.maven.org/maven2/net/bull/javamelody/javamelody-core/1.44.0/javamelody-core-1.44.0.pom.sha1 ...
Download complete.
Downloading: http://repo1.maven.org/maven2/org/jrobin/jrobin/1.5.9/jrobin-1.5.9.pom ...
Download complete.
Downloading: http://repo1.maven.org/maven2/org/jrobin/jrobin/1.5.9/jrobin-1.5.9.pom.sha1 ...
Download complete.
Downloading: http://repo1.maven.org/maven2/com/lowagie/itext/2.1.7/itext-2.1.7.pom ...
Download complete.
Downloading: http://repo1.maven.org/maven2/com/lowagie/itext/2.1.7/itext-2.1.7.pom.sha1 ...
Download complete.
Downloading: http://repo1.maven.org/maven2/net/bull/javamelody/javamelody-core/1.44.0/javamelody-core-1.44.0.jar ...
Download complete.
Downloading: http://repo1.maven.org/maven2/net/bull/javamelody/javamelody-core/1.44.0/javamelody-core-1.44.0.jar.sha1 ...
Download complete.
Downloading: http://repo1.maven.org/maven2/com/lowagie/itext/2.1.7/itext-2.1.7.jar ...
Download complete.
Downloading: http://repo1.maven.org/maven2/com/lowagie/itext/2.1.7/itext-2.1.7.jar.sha1 ...
Download complete.
Downloading: http://repo1.maven.org/maven2/org/jrobin/jrobin/1.5.9/jrobin-1.5.9.jar ...
Download complete.
Downloading: http://repo1.maven.org/maven2/org/jrobin/jrobin/1.5.9/jrobin-1.5.9.jar.sha1 ...
Download complete.

However, when creating the war with that JDK, the javamelody, jrobin and itext jars are not packaged.

Consequently, deployment and startup of the webapp fail.

SEVERE: Error configuring application listener of class net.bull.javamelody.SessionListener
java.lang.ClassNotFoundException: net.bull.javamelody.SessionListener
        at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1680)
        at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1526)
        at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4153)
  • Is the procedure to install the plugin from a local directory correct?
  • Any ideas how to resolve this problem so that the jars are packaged into the war?

Thanks,

Matej

EDIT: After removing grails files in ~/.grails/1.3.9/ the behaviour with JDK 6 and JDK 7 is the same:

jar files are not packaged and deployment fails.

by matejk at July 29, 2014 07:08 AM

Undeadly

g2k14: Ted Unangst on the Art of the Tedu

Ted Unangst (tedu@) talks about teduing a goodly amount of code, among other things:

Despite being in the same room as many other LibreSSL developers for the first time (since the beginning of LibreSSL at least), I didn't do too much work on that front. I did remove the compression feature (as made famous by the CRIME attack; not all protocols or deployments are vulnerable, but we're also aiming for a simpler feature set overall) and made a few other cleanups. While it's very helpful to be in the same room as other hackers to exchange ideas, having everyone pounding on the source at the same time is a little troublesome so I elected to stay out of the way.

Read more...

July 29, 2014 07:07 AM

StackOverflow

Light-weight library for recurring billing using Clojure web stacks?

I want to implement simple, light-weight recurring billing for a ring-based web app.

Here's what I found:

  • clj-stripe for Stripe API, but this also means that I have to use stripe.com for payment management.
  • Apache Ofbiz, a heavy, full-featured e-commerce framework that I can always inter-op with, but it's too heavy.
  • shopify-clojure, a Shopify wrapper that stopped updating in 2013.

What's the recommended way for basic billing management in Clojure? Ideally something like ActiveMerchant in RoR.

by Minos Niu at July 29, 2014 07:05 AM

Jenkins Fails to Start

We have a server running FreeBSD 9.1-p17 and Jenkins. I interact with it via PuTTY. We upgraded from Jenkins 1.458 to 1.570, via FreeBSD's ports collection. Due to this problem with starting, we decided to reinstall.

First we uninstalled Jenkins, then we moved the main Jenkins folder (/usr/local/eweru-dev/jenkins) to a backup location, and reinstalled (again, from the ports collection). When we reinstalled, we kept the user 'jenkins' from the last install.

Now, when we try to start Jenkins, we get an error. The error below is from when we try to start it by navigating to /usr/local/share/jenkins and typing java -jar jenkins.war. When we try to run it as a service (with service jenkins onestart), we get a very similar message.

The exception looks similar to the one from this blog, but I have tried connecting Jenkins to openjdk 7 and 8 to no avail.

Is information from our old Jenkins install finding its way into this one, breaking stuff? Or maybe there's some compatibility issue with FreeBSD 9.1.

Running from: /usr/local/share/jenkins/jenkins.war
webroot: $user.home/.jenkins
Jul 18, 2014 10:53:51 AM winstone.Logger logInternal
INFO: Beginning extraction from war file
Jul 18, 2014 10:53:51 AM org.eclipse.jetty.util.log.JavaUtilLog info
INFO: jetty-8.y.z-SNAPSHOT
Jul 18, 2014 10:53:55 AM org.eclipse.jetty.util.log.JavaUtilLog info
INFO: NO JSP Support for , did not find org.apache.jasper.servlet.JspServlet
Jenkins home directory: /homes/maxerdwien/.jenkins found at: $user.home/.jenkins
Jul 18, 2014 10:53:55 AM hudson.util.BootFailure publish
SEVERE: Failed to initialize Jenkins
hudson.util.AWTProblem: java.lang.NullPointerException
        at hudson.WebAppMain.contextInitialized(WebAppMain.java:182)
        at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:782)
        at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:424)
        at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:774)
        at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:249)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1242)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:717)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:494)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:95)
        at org.eclipse.jetty.server.Server.doStart(Server.java:282)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at winstone.Launcher.<init>(Launcher.java:154)
        at winstone.Launcher.main(Launcher.java:354)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at Main._main(Main.java:293)
        at Main.main(Main.java:98)
Caused by: java.lang.NullPointerException
        at sun.awt.X11FontManager.getDefaultPlatformFont(X11FontManager.java:779)
        at sun.font.SunFontManager$2.run(SunFontManager.java:433)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.font.SunFontManager.<init>(SunFontManager.java:376)
        at sun.awt.X11FontManager.<init>(X11FontManager.java:32)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at java.lang.Class.newInstance(Class.java:374)
        at sun.font.FontManagerFactory$1.run(FontManagerFactory.java:83)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.font.FontManagerFactory.getInstance(FontManagerFactory.java:74)
        at java.awt.Font.getFont2D(Font.java:490)
        at java.awt.Font.getFamily(Font.java:1219)
        at java.awt.Font.getFamily_NoClientCode(Font.java:1193)
        at java.awt.Font.getFamily(Font.java:1185)
        at java.awt.Font.toString(Font.java:1682)
        at hudson.util.ChartUtil.<clinit>(ChartUtil.java:229)
        at hudson.WebAppMain.contextInitialized(WebAppMain.java:181)
        ... 19 more

Jul 18, 2014 10:53:56 AM org.eclipse.jetty.util.log.JavaUtilLog warn
WARNING: Failed startup of context w.{,file:/home/maxerdwien/.jenkins/war/},/homes/maxerdwien/.jenkins/war
java.lang.NullPointerException
        at jenkins.util.groovy.GroovyHookScript.run(GroovyHookScript.java:63)
        at hudson.util.BootFailure.publish(BootFailure.java:43)
        at hudson.WebAppMain.contextInitialized(WebAppMain.java:244)
        at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:782)
        at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:424)
        at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:774)
        at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:249)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1242)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:717)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:494)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:95)
        at org.eclipse.jetty.server.Server.doStart(Server.java:282)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at winstone.Launcher.<init>(Launcher.java:154)
        at winstone.Launcher.main(Launcher.java:354)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at Main._main(Main.java:293)
        at Main.main(Main.java:98)

Jul 18, 2014 10:53:56 AM org.eclipse.jetty.util.log.JavaUtilLog info
INFO: Started SelectChannelConnector@0.0.0.0:8080
Jul 18, 2014 10:53:56 AM winstone.Logger logInternal
INFO: Winstone Servlet Engine v2.0 running: controlPort=disabled

Any help would be very appreciated. I've been googling for days.

by max at July 29, 2014 07:03 AM

How can I customize Scala ambiguous implicit errors when using shapeless type inequalities

def typeSafeSum[T <: Nat, W <: Nat, R <: Nat](x: T, y: W)
         (implicit sum: Sum.Aux[T, W, R], error: R =:!= _7) = x

typeSafeSum(_3, _4) //compilation error, ambiguous implicit found.

I don't think the error message "ambiguous implicit found" is friendly. How can I customize it to say something like "the sum of the two Nat values should not equal 7"?

Many thanks in advance

by Cloud tech at July 29, 2014 06:51 AM

QuantOverflow

Underlying changes impact on implied volatility

What are some valid techniques that can be used to simulate how changes in the underlying are most likely to impact implied volatility, along with the skew across all strikes for options with the same expiration? I am interested in both equity index and commodities.

by user3803295 at July 29, 2014 06:23 AM

StackOverflow

What is Scala way of finding whether all the elements of an Array has same length?

I am new to Scala but an old hand at Java, and I have some experience working with FP languages like Haskell.

I am wondering how to implement this using Scala. I have an array of strings and I just want to check whether all of them have the same length, in an FP way. Here is my current version, which works...

def checkLength(vals: Array[String]): Boolean = {
  var len = -1
  for(x <- vals){
    if(len < 0)
      len = x.length()
    else{
      if (x.length() != len)
        return false
      else
        len = x.length()
    }
  }
  return true;
}

And I am pretty sure there is a better way of doing this in Scala/FP...
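
For comparison, one possible FP-style version (a sketch; like the loop above, it treats an empty array as trivially uniform):

def checkLength(vals: Array[String]): Boolean =
  vals.isEmpty || vals.forall(_.length == vals.head.length)

An equivalent alternative is vals.map(_.length).distinct.size <= 1, at the cost of building an intermediate collection.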

by Teja Kantamneni at July 29, 2014 06:11 AM

CompsciOverflow

Teaching NP-completeness - Turing reductions vs Karp reductions

I'm interested in the question of how best to teach NP-completeness to computer science majors. In particular, should we teach it using Karp reductions or using Turing reductions?

I feel that the concepts of NP-completeness and reductions are something that every computer science major ought to learn. However, when teaching NP-completeness, I've noticed that the use of Karp reductions has some downsides.

First of all, Karp reductions seem to be unnecessarily confusing for some students. The intuitive notion of a reduction is "if I have an algorithm to solve problem X, then I can use it to solve problem Y, too". That's very intuitive -- but it maps much better to Turing reductions than to Karp reductions. As a result, I see students who are trying to prove NP-completeness get led astray by their intuition and form an incorrect proof. Trying to teach both kinds of reductions and emphasizing this aspect of Karp reductions sometimes feels a little bit like needless formalism and takes up unnecessary class time and student attention on what feels like an inessential technical detail; it's not self-evident why we use this more restricted notion of reduction.

I do understand the difference between Karp reductions and Turing (Cook) reductions, and how they lead to different notions of NP-completeness. I realize that Karp reductions give us a finer granularity of distinctions between complexity classes. So, for serious study of complexity theory, Karp reductions are obviously the right tool. But for computer science students who are just learning this and are never going to go into complexity theory, I'm uncertain whether this finer distinction is critical for them to be exposed to.

Finally, as a student, I remember feeling puzzled when I ran across a problem like "tautology" -- i.e., given a 3CNF formula, check whether it is a tautology. What was confusing was that this problem is clearly hard: any polynomial-time algorithm for it would imply that $P=NP$; and solving this problem is obviously as hard as solving the tautology problem. However, even though intuitively tautology is as hard as satisfiability, tautology is not NP-hard. Yes, I understand today why this is the case, but at the time I remember being puzzled by this. (What went through my head once I finally understood was: Why do we draw this distinction between NP-hard and co-NP-hard, anyway? That seems artificial and not very well-motivated by practice. Why do we focus on NP rather than co-NP? They seem equally natural. From a practical perspective, co-NP-hardness seems to have essentially the same practical consequences as NP-hardness, so why do we get all hung up on this distinction? Yes, I know the answers, but as a student, I remember this just made the subject feel more arcane and poorly motivated.)

So, my question is this. When we teach NP-completeness to students, is it better to teach using Karp reductions or Turing reductions? Has anyone tried teaching the concept of NP-completeness using Turing reductions? If so, how did it go? Would there be any non-obvious pitfalls or disadvantages if we taught the concepts using Turing reductions, and skipped the conceptual issues associated with Karp reductions?


Related: see here and here, which mentions that the reason why we use Karp reductions in the literature is because it enables us to distinguish between NP-hardness and co-NP-hardness. However, it does not seem to give any answer that's focused on a pedagogical perspective of whether this ability is critical for the learning objectives of an algorithms class that should be taken by every CS major. See also here on cstheory.SE, which has a similar discussion.

by D.W. at July 29, 2014 06:06 AM

Bounded existential polymorphism

In Pierce's "Types and Programing Languages" he, at the very end, presents the most powerful system in the book: $F^{\omega}_{<:}$. He, however, does not explain how bounded existential polymorphism would work. I could not find a reference to this online anywhere. Is this just not possible or just not very interesting or what? If this is possible could someone point me to a reference?

by Jake at July 29, 2014 06:03 AM

/r/netsec

StackOverflow

failed to replace string in scala using foldLeft [on hold]

I am trying to convert SQL Server table DDL to Vertica DDL, but SQL Server uses [] while Vertica doesn't support it, so I am writing a Scala snippet to remove the [] around the data types using the following.

var columns = """
float
datetime
"""

val columnList = columns.split("\n").toList

val lines = """
[col1 name] [float] NULL ,
[col2 name] [datetime] NULL ,
[col3 name] [datetime] NULL ,
""".split("\n").toList

val result = lines.map({ line =>
  columnList.foldLeft(line)({ (x,y) =>
    x.replace(s"[$y]",y)
  })
})

result.foreach(x => println(x))

I hope the output is

[col1 name] float NULL ,
[col2 name] datetime NULL ,
[col3 name] datetime NULL ,

but the output is still

[col1 name] [float] NULL ,
[col2 name] [datetime] NULL ,
[col3 name] [datetime] NULL ,

by Daniel Wu at July 29, 2014 05:53 AM

Scala PackratParsers: backtracking seems not to work

The following scala code fails to work as expected:

import scala.util.parsing.combinator.PackratParsers
import scala.util.parsing.combinator.syntactical.StandardTokenParsers
import scala.util.parsing.combinator.lexical.StdLexical

object Minimal extends StandardTokenParsers with PackratParsers {
  override val lexical = new StdLexical

  lexical.delimiters += ("<", "(", ")")

  lazy val expression: PackratParser[Any] = (
  numericLit
  | numericLit ~ "<" ~ numericLit
  )

  def parseAll[T](p: PackratParser[T], in: String): ParseResult[T] =
    phrase(p)(new PackratReader(new lexical.Scanner(in)))

  def main(args: Array[String]) = println(parseAll(expression, "2 < 4"))
}

I get the error message:

[1.3] failure: end of input expected

2 < 4
  ^

If however I change the definition of "expression" to

  lazy val expression: PackratParser[Any] = (
    numericLit ~ "<" ~ numericLit
  | numericLit
  )

the problem disappears.

The problem seems to be that with the original definition of "expression", the first alternative consisting only of "numericLit" is applied, so the parser then expects the input to end immediately afterwards. I do not understand why the parser does not backtrack as soon as it notices that the input does not in fact end; Scala PackratParsers are supposed to be backtracking, and I also made sure to replace "def" with "lazy val" as suggested in the answer to another question.
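
For what it's worth, as far as I understand, | in Scala's parser combinators is an ordered choice that commits to the first alternative succeeding at a given position; packrat parsing adds memoization and left-recursion support but does not change that. A possible sketch using the longest-match alternation ||| from scala.util.parsing.combinator.Parsers (untested against this exact grammar):

lazy val expression: PackratParser[Any] = (
    numericLit ~ "<" ~ numericLit
  ||| numericLit
)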

by emh at July 29, 2014 05:20 AM

Getting and passing structs by value in Clojure with JNA

I have a C API that I'm trying to use within Clojure, through the JNA API. My issue can best be demonstrated with the following example. Say I have this C code in a library:

typedef struct {
    int foo;
    int bar;
    double baz;
} MyStruct;

MyStruct createStruct() {
    MyStruct myStruct;
    myStruct.foo = 3;
    myStruct.bar = 4;
    myStruct.baz = 3.14;

    return myStruct;
}

double addStruct(MyStruct myStruct) {
    return myStruct.foo + myStruct.bar + myStruct.baz;
}

In this example, I'd like to call createStruct, and then pass that result to addStruct. The important point here is that MyStruct is passed by value as both a return type and an argument. At no point do I need to actually read the values of the fields in MyStruct.

Additionally, in my system, native functions are wrapped like this:

; `quux` is defined in `some-lib` and returns an `int`
(let [fn- (com.sun.jna.Function/getFunction "some-lib" "quux")]
  (fn [& args]
    (.invoke fn- Integer (to-array args))))

The goal is to get a type to substitute for Integer above that will wrap MyStruct as a value.

The only resource that I've found covering this subject is this article, but it only discusses how to pass structs by reference.

Given that, here are the different approaches I've tried to take to solve this problem:

  1. Create a class that inherits from Structure, which is JNA's built-in mechanism for creating and using structs. Given the information on that page, I tried to create the following class using only Clojure:

    class MyStruct extends Structure implements Structure.ByValue {
        int foo;
        int bar;
        double baz;
    }
    

    deftype doesn't work for this scenario, since the class needs to inherit from the abstract class Structure, and gen-class doesn't work because the class needs to have the public non-static fields foo, bar and baz.

    From what I can tell, none of the standard Clojure Java interop facilities can create the above class.

  2. Create a class that inherits from Structure and override the struct field getter/setter methods. Since gen-class is (I believe) the only Clojure construct that allows for direct inheritance, and it doesn't support multiple public non-static fields, the next alternative is to simply not use fields at all. Looking at the Structure abstract class documentation, it seems like there's a concoction of overrides possible to use 'virtual' struct fields, such that it really gets and sets data from a different source (such as the state field from gen-class). Looking through the documentation, it seems like overriding readField, writeField, and some other methods may have the intended effect, but I was unclear how to do this from reading the documentation, and I couldn't find any similar examples online.

  3. Use a different storage class. JNA has a myriad of classes for wrapping native types. I'm wondering if, rather than defining and using a Structure class, I could use another generic class that can take an arbitrary number of bytes (like how Integer can hold anything that's 4 bytes wide, regardless of what type the source 'actually' is). Is it possible to, for example, say that a function returns an array of bytes of length 16 (since sizeof(MyStruct) is 16)? What about wrapping a fixed-size array in a container class that implements NativeMapped? I couldn't find examples of how to do either.

by Kyle Lacy at July 29, 2014 05:01 AM

Shuffle a list of integers with Java 8 Streams API

I tried to translate the following line of Scala to Java 8 using the Streams API:

// Scala
util.Random.shuffle((1 to 24).toList)

To write the equivalent in Java I created a range of integers:

IntStream.range(1, 25)

I expected to find a toList method in the Stream API, but IntStream only has this rather awkward method:

collect(
  Supplier<R> supplier, ObjIntConsumer<R> accumulator, BiConsumer<R,R> combiner)

How can I shuffle a list with Java 8 Streams API?

by deamon at July 29, 2014 04:39 AM

Serialize and Deserialize scala enumerations or case objects using json4s

Suppose I have an enumeration or sealed group of case objects as follows:

  sealed abstract class Status
  case object Complete extends Status
  case object Failed extends Status
  case object Pending extends Status
  case object Unknown extends Status

or

  object Status extends Enumeration {
    val Complete, Failed, Pending, Unknown = Value
  }

What is the easiest way to create json formats for these so that I can very easily (programmatically) generate json formats for use in a custom JsonFormat factory method, such as the following, which works for all normal case classes, strings, collections, etc., but produces {} or {"name": null} for the above two types of enumerations?:

import org.json4s.DefaultFormats
import org.json4s.jackson.JsonMethods.parse
import org.json4s.jackson.Serialization
import org.json4s.jvalue2extractable
import org.json4s.string2JsonInput

trait JsonFormat[T] {
  def read(json: String): T
  def write(t: T): String
}

object JsonFormat {

  implicit lazy val formats = DefaultFormats

  def create[T <: AnyRef: Manifest](): JsonFormat[T] = new JsonFormat[T] {
    def read(json: String): T = parse(json).extract[T]
    def write(t: T): String = Serialization.write(t)
  }
}
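
For the Enumeration variant, one possible approach is a sketch along these lines (assuming the json4s-ext module is on the classpath; EnumNameSerializer serializes values by their names):

import org.json4s.{DefaultFormats, Formats}
import org.json4s.ext.EnumNameSerializer

// replaces the plain DefaultFormats above
implicit lazy val formats: Formats = DefaultFormats + new EnumNameSerializer(Status)

For the sealed case object variant there is no equivalent built-in, so a small custom Serializer mapping the names back to the objects would likely be needed.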

by jonderry at July 29, 2014 03:53 AM

CompsciOverflow

Internet Works but sites other than google fb and gmail wont load? [on hold]

My internet connection is working well, because torrents download at maximum bandwidth, but the browsers aren't loading anything other than Gmail, Google search results and Facebook. I have to keep refreshing pages again and again to load them if they are other than the sites mentioned above.

Solutions tried: scanned with Bitdefender, Kaspersky, Spybot, Microsoft Safety Scanner, Malwarebytes, SuperAntiSpyware; reset all browsers.

by EvilWarrior at July 29, 2014 03:50 AM

/r/compsci

Hey i'm interested in learning machine learning and image processing, but don't know where to get started

I'm really interested in getting into machine learning and image processing but don't really know where to start. Looking around, I haven't really found anything online that gives a satisfactory explanation. I would greatly appreciate it if everyone could give me pointers on things I should know. This can be anything from the various frameworks involved, to the level of math required, to suggested programming languages and other things. I would also like it if people could suggest books and websites that I could look into. Thanks for your time!

submitted by jmwandu
[link] [1 comment]

July 29, 2014 03:47 AM

/r/clojure

Can someone walk me through creating a barebones web app with oauth2?

Hi fellas,

Between tackling this and some PHP related projects for work, I have been completely stuck on this personal goal of mine for weeks. My progress remains at successfully utilizing OAuth2 with Google via the command line - this involved copying and pasting information from the browser to the REPL. My goal is to have it all done seamlessly in the browser.

Ideally I'd like to do it with Facebook, but I gather if I was walked through the process with any platform then I'd be able to put 2 and 2 together to conquer the others.

There are no other local Clojure developers that I know of so it is really hard to make progress when I am stuck. Cheers.

edit

If it is any incentive, I can put together a tutorial after you walk me through it and attribute the tutorial to you.

submitted by hanzuna
[link] [comment]

July 29, 2014 03:22 AM

Wes Felter

Lobsters

/r/clojure

QuantOverflow

How can someone practice quantitative finance analysis skills outside of class/work? [on hold]

I am looking for ways to practice/study the skills needed to succeed in quant finance. I am currently an Electrical Engineer working within the Energy Management Systems field, but am working on a Masters in Financial Math. I don't get exposed to quant modeling or anything like that in my current job. Outside of what I will learn in the Master's program, what can I do to learn the core skills needed to compete for jobs in quant finance (modeling, creating algo systems, etc.)?

by bmwthree35 at July 29, 2014 01:40 AM

/r/compsci

Quit CS because I couldn't pass the high-level classes

I quit CS 2 years ago, and I'm finishing my degree in Civil Engineering next year.

Here's the thing - engineering was never my absolute love. CS was, and always will be. I love the field with a vengeance, even though I only have pretty awful memories at school to go along with it.

I struggled with CE, but I'm making my way through it even though I don't look forward to classes and most of it is boring. I did it under pressure to get a degree, that's all.

With CS, I looked forward to every class, did the work, and underperformed all the time. I constantly needed help with the coding, but it was the higher level theory that really nailed me in the coffin. Biological computing, game theory, advanced algorithms, methods, so on. And I love computer vision - couldn't deal with even a few initial weeks of curriculum before I had to stop. Nevermind the math classes. It was like being raped with an old splintery broomstick from the 60s.

So where am I now? I'm graduating soon, and all I think about is how I could go back and finish learning all the stuff I've always wanted to learn in CS. The material I dreamed about, that was so difficult and fast-paced I had to convince myself the whole field wasn't for me.

But it has to be. I want to be a computer scientist. I just don't know if I can.

I don't want to wait another 2 years and regret not having gone back and figured out a way to attack my degree and finish it well. The emphasis being on learning. It's all I ever wanted, in earnest.

Has anyone been in my shoes? Did anyone here love CS to death but just couldn't keep up with the curriculum?

To be fair, I was also at a college rated for its CS in the country, and it was deathly difficult for everyone. But I still think I was doing something wrong. Just not sure what it was.

Was I not cut out for it?

submitted by canonau
[link] [15 comments]

July 29, 2014 01:38 AM

arXiv Logic in Computer Science

Using Flow Specifications of Parameterized Cache Coherence Protocols for Verifying Deadlock Freedom. (arXiv:1407.7468v1 [cs.DC])

We consider the problem of verifying deadlock freedom for symmetric cache coherence protocols. In particular, we focus on a specific form of deadlock which is useful for the cache coherence protocol domain and consistent with the internal definition of deadlock in the Murphi model checker: we refer to this deadlock as a system-wide deadlock (s-deadlock). In s-deadlock, the entire system gets blocked and is unable to make any transition. Cache coherence protocols consist of N symmetric cache agents, where N is an unbounded parameter; thus the verification of s-deadlock freedom is naturally a parameterized verification problem. Parameterized verification techniques work by using sound abstractions to reduce the unbounded model to a bounded model. Efficient abstractions which work well for industrial scale protocols typically bound the model by replacing the state of most of the agents by an abstract environment, while keeping just one or two agents as is. However, leveraging such efficient abstractions becomes a challenge for s-deadlock: a violation of s-deadlock is a state in which the transitions of all of the unbounded number of agents cannot occur and so a simple abstraction like the one above will not preserve this violation. In this work we address this challenge by presenting a technique which leverages high-level information about the protocols, in the form of message sequence diagrams referred to as flows, for constructing invariants that are collectively stronger than s-deadlock. Efficient abstractions can be constructed to verify these invariants. We successfully verify the German and Flash protocols using our technique.

by <a href="http://arxiv.org/find/cs/1/au:+Sethi_D/0/1/0/all/0/1">Divjyot Sethi</a>, <a href="http://arxiv.org/find/cs/1/au:+Talupur_M/0/1/0/all/0/1">Muralidhar Talupur</a>, <a href="http://arxiv.org/find/cs/1/au:+Malik_S/0/1/0/all/0/1">Sharad Malik</a> at July 29, 2014 01:30 AM

An Optimal Game Theoretical Framework for Mobility Aware Routing in Mobile Ad hoc Networks. (arXiv:1407.7464v1 [cs.NI])

Selfish behaviors are common in self-organized Mobile Ad hoc Networks (MANETs) where nodes belong to different authorities. Since cooperation of nodes is essential for routing protocols, various methods have been proposed to stimulate cooperation among selfish nodes. In order to provide sufficient incentives, most of these methods pay nodes a premium over their actual costs of participation. However, they lead to considerably large overpayments. Moreover, existing methods ignore mobility of nodes, for simplicity. However, owing to the mobile nature of MANETs, this assumption seems unrealistic. In this paper, we propose an optimal game theoretical framework to ensure the proper cooperation in mobility aware routing for MANETs. The proposed method is based on the multi-dimensional optimal auctions which allows us to consider path durations, in addition to the route costs. Path duration is a metric that best reflects changes in topology caused by mobility of nodes and, it is widely used in mobility aware routing protocols. Furthermore, the proposed mechanism is optimal in that it minimizes the total expected payments. We provide theoretical analysis to support our claims. In addition, simulation results show significant improvements in terms of payments compared to the most popular existing methods.

by <a href="http://arxiv.org/find/cs/1/au:+Khaledi_M/0/1/0/all/0/1">Mehrdad Khaledi</a>, <a href="http://arxiv.org/find/cs/1/au:+Khaledi_M/0/1/0/all/0/1">Mojgan Khaledi</a>, <a href="http://arxiv.org/find/cs/1/au:+Rabiee_H/0/1/0/all/0/1">Hamidreza Rabiee</a> at July 29, 2014 01:30 AM

Parallelism-Aware Memory Interference Delay Analysis for COTS Multicore Systems. (arXiv:1407.7448v1 [cs.DC])

In modern Commercial Off-The-Shelf (COTS) multicore systems, each core can generate many parallel memory requests at a time. The processing of these parallel requests in the DRAM controller greatly affects the memory interference delay experienced by running tasks on the platform. In this paper, we model a modern COTS multicore system which has a nonblocking last-level cache (LLC) and a DRAM controller that prioritizes reads over writes. To minimize interference, we focus on LLC and DRAM bank partitioned systems. Based on the model, we propose an analysis that computes a safe upper bound for the worst-case memory interference delay. We validated our analysis on a real COTS multicore platform with a set of carefully designed synthetic benchmarks as well as SPEC2006 benchmarks. Evaluation results show that our analysis more accurately captures the worst-case memory interference delay and provides safer upper bounds compared to a recently proposed analysis which significantly under-estimates the delay.

by <a href="http://arxiv.org/find/cs/1/au:+Yun_H/0/1/0/all/0/1">Heechul Yun</a> at July 29, 2014 01:30 AM

Critical Independent Sets of a Graph. (arXiv:1407.7368v1 [cs.DM])

Let $G$ be a simple graph with vertex set $V\left( G\right) $. A set $S\subseteq V\left( G\right) $ is independent if no two vertices from $S$ are adjacent, and by $\mathrm{Ind}(G)$ we mean the family of all independent sets of $G$.

The number $d\left( X\right) =$ $\left\vert X\right\vert -\left\vert N(X)\right\vert $ is the difference of $X\subseteq V\left( G\right) $, and a set $A\in\mathrm{Ind}(G)$ is critical if $d(A)=\max \{d\left( I\right) :I\in\mathrm{Ind}(G)\}$ (Zhang, 1990).

Let us recall the following definitions:

$\mathrm{core}\left( G\right) $ = $\bigcap$ {S : S is a maximum independent set}.

$\mathrm{corona}\left( G\right)$ = $\bigcup$ {S : S is a maximum independent set}.

$\mathrm{\ker}(G)$ = $\bigcap$ {S : S is a critical independent set}.

$\mathrm{diadem}(G)$ = $\bigcup$ {S : S is a critical independent set}.

In this paper we present various structural properties of $\mathrm{\ker}(G)$, in relation with $\mathrm{core}\left( G\right) $, $\mathrm{corona}\left( G\right) $, and $\mathrm{diadem}(G)$.

by <a href="http://arxiv.org/find/cs/1/au:+Levit_V/0/1/0/all/0/1">Vadim E. Levit</a>, <a href="http://arxiv.org/find/cs/1/au:+Mandrescu_E/0/1/0/all/0/1">Eugen Mandrescu</a> at July 29, 2014 01:30 AM

A Taxonomy and Survey on eScience as a Service in the Cloud. (arXiv:1407.7360v1 [cs.DC])

Cloud computing has recently evolved as a popular computing infrastructure for many applications. Scientific computing, which was mainly hosted in private clusters and grids, has started to migrate development and deployment to the public cloud environment. eScience as a service becomes an emerging and promising direction for science computing. We review recent efforts in developing and deploying scientific computing applications in the cloud. In particular, we introduce a taxonomy specifically designed for scientific computing in the cloud, and further review the taxonomy with four major kinds of science applications, including life sciences, physics sciences, social and humanities sciences, and climate and earth sciences. Our major finding is that, despite existing efforts in developing cloud-based eScience, eScience still has a long way to go to fully unlock the power of cloud computing paradigm. Therefore, we present the challenges and opportunities in the future development of cloud-based eScience services, and call for collaborations and innovations from both the scientific and computer system communities to address those challenges.

by <a href="http://arxiv.org/find/cs/1/au:+Zhou_A/0/1/0/all/0/1">Amelie Chi Zhou</a>, <a href="http://arxiv.org/find/cs/1/au:+He_B/0/1/0/all/0/1">Bingsheng He</a>, <a href="http://arxiv.org/find/cs/1/au:+Ibrahim_S/0/1/0/all/0/1">Shadi Ibrahim</a> at July 29, 2014 01:30 AM

Analysis of Timed and Long-Run Objectives for Markov Automata. (arXiv:1407.7356v1 [cs.LO])

Markov automata (MAs) extend labelled transition systems with random delays and probabilistic branching. Action-labelled transitions are instantaneous and yield a distribution over states, whereas timed transitions impose a random delay governed by an exponential distribution. MAs are thus a nondeterministic variation of continuous-time Markov chains. MAs are compositional and are used to provide a semantics for engineering frameworks such as (dynamic) fault trees, (generalised) stochastic Petri nets, and the Architecture Analysis & Design Language (AADL). This paper considers the quantitative analysis of MAs. We consider three objectives: expected time, long-run average, and timed (interval) reachability. Expected time objectives focus on determining the minimal (or maximal) expected time to reach a set of states. Long-run objectives determine the fraction of time to be in a set of states when considering an infinite time horizon. Timed reachability objectives are about computing the probability to reach a set of states within a given time interval. This paper presents the foundations and details of the algorithms and their correctness proofs. We report on several case studies conducted using a prototypical tool implementation of the algorithms, driven by the MAPA modelling language for efficiently generating MAs.

by <a href="http://arxiv.org/find/cs/1/au:+Guck_D/0/1/0/all/0/1">Dennis Guck</a>, <a href="http://arxiv.org/find/cs/1/au:+Hatefi_H/0/1/0/all/0/1">Hassan Hatefi</a>, <a href="http://arxiv.org/find/cs/1/au:+Hermanns_H/0/1/0/all/0/1">Holger Hermanns</a>, <a href="http://arxiv.org/find/cs/1/au:+Katoen_J/0/1/0/all/0/1">Joost-Pieter Katoen</a>, <a href="http://arxiv.org/find/cs/1/au:+Timmer_M/0/1/0/all/0/1">Mark Timmer</a> at July 29, 2014 01:30 AM

Price of Anarchy of Innovation Diffusion in Social Networks. (arXiv:1407.7319v1 [cs.GT])

There have been great efforts in studying the cascading behavior in social networks such as the innovation diffusion, etc. Game theoretically, in a social network where individuals choose from two strategies: A (the innovation) and B (the status quo) and get payoff from their neighbors for coordination, it has long been known that the Price of Anarchy (PoA) of this game is not 1, since the Nash equilibrium (NE) where all players take B (B Nash) is inferior to the one all players taking A (A Nash). However, no quantitative analysis has been performed to give an accurate upper bound of PoA in this game.

In this paper, we adopt a widely used networked coordination game setting [3] to study how bad a Nash equilibrium can be and give a tight upper bound of the PoA of such games. We show that there is an NE that is slightly worse than the B Nash. On the other hand, the PoA is bounded and the worst NE cannot be much worse than the B Nash. In addition, we discuss how the PoA upper bound would change when compatibility between A and B is introduced, and show an intuitive result that the upper bound strictly decreases as the compatibility is increased.

by <a href="http://arxiv.org/find/cs/1/au:+Chen_X/0/1/0/all/0/1">Xilun Chen</a>, <a href="http://arxiv.org/find/cs/1/au:+Wu_C/0/1/0/all/0/1">Chenxia Wu</a> at July 29, 2014 01:30 AM

Parameterized Model-Checking for Timed-Systems with Conjunctive Guards (Extended Version). (arXiv:1407.7305v1 [cs.LO])

In this work we extend Emerson and Kahlon's cutoff theorems for process skeletons with conjunctive guards to Parameterized Networks of Timed Automata, i.e. systems obtained by an \emph{a priori} unknown number of Timed Automata instantiated from a finite set $U_1, \dots, U_n$ of Timed Automata templates. In this way we aim at giving a tool to universally verify software systems where an unknown number of software components (i.e. processes) interact with continuous time temporal constraints. It is often the case, indeed, that distributed algorithms show a heterogeneous nature, combining dynamic aspects with real-time aspects. In the paper we will also show how to model check a protocol that uses special variables storing identifiers of the participating processes (i.e. PIDs) in Timed Automata with conjunctive guards. This is non-trivial, since solutions to the parameterized verification problem often rely on the processes being symmetric, i.e. indistinguishable. On the other hand, many popular distributed algorithms make use of PIDs and thus cannot directly apply those solutions.

by <a href="http://arxiv.org/find/cs/1/au:+Spalazzi_L/0/1/0/all/0/1">Luca Spalazzi</a>, <a href="http://arxiv.org/find/cs/1/au:+Spegni_F/0/1/0/all/0/1">Francesco Spegni</a> at July 29, 2014 01:30 AM

Q-A: Towards the Solution of Usability-Security Tension in User Authentication. (arXiv:1407.7277v1 [cs.HC])

Users often choose passwords that are easy to remember but also easy to guess by attackers. Recent studies have revealed the vulnerability of textual passwords to shoulder surfing and keystroke loggers. It remains a critical challenge in password research to develop an authentication scheme that addresses these security issues, in addition to offering good memorability. Motivated by psychology research on humans' cognitive strengths and weaknesses, we explore the potential of cognitive questions as a way to address the major challenges in user authentication. We design, implement, and evaluate Q-A, a novel cognitive-question-based password system that requires a user to enter the letter at a given position in her answer for each of six personal questions (e.g. "What is the name of your favorite childhood teacher?"). In this scheme, the user does not need to memorize new, artificial information as her authentication secret. Our scheme offers 28 bits of theoretical password space, which has been found sufficient to prevent online brute-force attacks. Q-A is also robust against shoulder surfing and keystroke loggers. We conducted a multi-session in-lab user study to evaluate the usability of Q-A; 100% of users were able to remember their Q-A password over the span of one week, although login times were high. We compared our scheme with random six character passwords and found that login success rate in Q-A was significantly higher. Based on our results, we suggest that Q-A would be most appropriate in contexts that demand high security and where logins occur infrequently (e.g., online bank accounts).

by <a href="http://arxiv.org/find/cs/1/au:+Al_Ameen_M/0/1/0/all/0/1">Mahdi Nasrullah Al-Ameen</a> (1), <a href="http://arxiv.org/find/cs/1/au:+Haque_S/0/1/0/all/0/1">S M Taiabul Haque</a> (1), <a href="http://arxiv.org/find/cs/1/au:+Wright_M/0/1/0/all/0/1">Matthew Wright</a> (1) ((1) The University of Texas at Arlington, Arlington, TX, USA) at July 29, 2014 01:30 AM

Implementation and Abstraction in Mathematics. (arXiv:1407.7274v1 [cs.LO])

This manuscript presents a type-theoretic foundation for mathematics in which each type is associated with an equality relation in correspondence with the standard notions of isomorphism in mathematics. The main result is an abstraction theorem stating that isomorphic objects are inter-substitutable in well-typed contexts.

by <a href="http://arxiv.org/find/cs/1/au:+McAllester_D/0/1/0/all/0/1">David McAllester</a> at July 29, 2014 01:30 AM

On Spectrum Sharing Between Energy Harvesting Cognitive Radio Users and Primary Users. (arXiv:1407.7267v1 [cs.NI])

This paper investigates the maximum secondary throughput for a rechargeable secondary user (SU) sharing the spectrum with a primary user (PU) plugged to a reliable power supply. The SU maintains a finite energy queue and harvests energy from natural resources and primary radio frequency (RF) transmissions. We propose a power allocation policy at the PU and analyze its effect on the throughput of both the PU and SU. Furthermore, we study the impact of the bursty arrivals at the PU on the energy harvested by the SU from RF transmissions. Moreover, we investigate the impact of the rate of energy harvesting from natural resources on the SU throughput. We assume fading channels and compute exact closed-form expressions for the energy harvested by the SU under fading. Results reveal that the proposed power allocation policy along with the implemented RF energy harvesting at the SU enhance the throughput of both primary and secondary links.

by <a href="http://arxiv.org/find/cs/1/au:+Shafie_A/0/1/0/all/0/1">Ahmed El Shafie</a>, <a href="http://arxiv.org/find/cs/1/au:+Ashour_M/0/1/0/all/0/1">Mahmoud Ashour</a>, <a href="http://arxiv.org/find/cs/1/au:+Khattab_T/0/1/0/all/0/1">Tamer Khattab</a>, <a href="http://arxiv.org/find/cs/1/au:+Mohamed_A/0/1/0/all/0/1">Amr Mohamed</a> at July 29, 2014 01:30 AM

Locating-dominating sets and identifying codes in graphs of girth at least 5. (arXiv:1407.7263v1 [math.CO])

Locating-dominating sets and identifying codes are two closely related notions in the area of separating systems. Roughly speaking, they consist of a dominating set of a graph such that every vertex is uniquely identified by its neighbourhood within the dominating set. In this paper, we study the size of a smallest locating-dominating set or identifying code for graphs of girth at least 5 and of given minimum degree. We use the technique of vertex-disjoint paths to provide upper bounds on the minimum size of such sets, and construct graphs that come close to meeting these bounds.

by <a href="http://arxiv.org/find/math/1/au:+Balbuena_C/0/1/0/all/0/1">Camino Balbuena</a>, <a href="http://arxiv.org/find/math/1/au:+Foucaud_F/0/1/0/all/0/1">Florent Foucaud</a>, <a href="http://arxiv.org/find/math/1/au:+Hansberg_A/0/1/0/all/0/1">Adriana Hansberg</a> at July 29, 2014 01:30 AM

When Augmented Reality Meets Big Data. (arXiv:1407.7223v1 [cs.DB])

With computing and sensing woven into the fabric of everyday life, we live in an era where we are awash in a flood of data from which we can gain rich insights. Augmented reality (AR) is able to collect and help analyze the growing torrent of data about user engagement metrics within our personal mobile and wearable devices. This enables us to blend information from our senses and the digitalized world in a myriad of ways that was not possible before. AR and big data have a logical maturity that inevitably converges them. The trend of harnessing AR and big data to breed new interesting applications is starting to have a tangible presence. In this paper, we explore the potential to capture value from the marriage between AR and big data technologies, followed by several challenges that must be addressed to fully realize this potential.

by <a href="http://arxiv.org/find/cs/1/au:+Huang_Z/0/1/0/all/0/1">Zhanpeng Huang</a>, <a href="http://arxiv.org/find/cs/1/au:+Hui_P/0/1/0/all/0/1">Pan Hui</a>, <a href="http://arxiv.org/find/cs/1/au:+Peylo_C/0/1/0/all/0/1">Christoph Peylo</a> at July 29, 2014 01:30 AM

TLS Proxies: Friend or Foe?. (arXiv:1407.7146v1 [cs.CR])

The use of TLS proxies to intercept encrypted traffic is controversial since the same mechanism can be used for both benevolent purposes, such as protecting against malware, and for malicious purposes, such as identity theft or warrantless government surveillance. To understand the prevalence and uses of these proxies, we build a TLS proxy measurement tool and deploy it via a Google AdWords campaign. We generate 2.9 million certificate tests and find that 1 in 250 TLS connections are proxied. The majority of these proxies appear to be benevolent, however we identify over 1,000 cases where three malware products are using this technology nefariously. We also find numerous instances of negligent and duplicitous behavior, some of which degrade security for users without their knowledge. To better understand user attitudes toward proxies, we conduct a survey of 1,261 users. The conflicting purposes of TLS proxies are also demonstrated in these findings, with users simultaneously comfortable with benevolent proxies but wary of attackers and government intrusion. Distinguishing these types of practices is challenging in practice, indicating a need for transparency and user opt-in.

by <a href="http://arxiv.org/find/cs/1/au:+ONeill_M/0/1/0/all/0/1">Mark O&#x27;Neill</a>, <a href="http://arxiv.org/find/cs/1/au:+Ruoti_S/0/1/0/all/0/1">Scott Ruoti</a>, <a href="http://arxiv.org/find/cs/1/au:+Seamons_K/0/1/0/all/0/1">Kent Seamons</a>, <a href="http://arxiv.org/find/cs/1/au:+Zappala_D/0/1/0/all/0/1">Daniel Zappala</a> at July 29, 2014 01:30 AM

Linear Intransitive Temporal Logic of Knowledge LTK_r, Decision Algorithms, Inference Rules. (arXiv:1407.7136v1 [cs.LO])

Our paper investigates the linear logic of knowledge and time LTK_r with a reflexive intransitive time relation. The logic is defined semantically -- as the set of formulas which are true at special frames with an intransitive and reflexive binary time relation. The LTK_r -frames are linear chains of clusters connected by a reflexive intransitive relation $R_T$. Elements inside a cluster are connected by several equivalence relations imitating the knowledge of different agents. We study the decidability problem for formulas and inference rules. Decidability for formulas follows from decidability w.r.t. admissible inference rules. To study admissibility, we introduce some special constructive Kripke models useful for describing admissibility of inference rules. With a special technique of definable valuations we find an algorithm determining admissible inference rules in LTK_r. That is, we show that the logic LTK_r is decidable and decidable with respect to admissibility of inference rules.

by <a href="http://arxiv.org/find/cs/1/au:+Lukyanchuk_A/0/1/0/all/0/1">Alexandra Lukyanchuk</a>, <a href="http://arxiv.org/find/cs/1/au:+Rybakov_V/0/1/0/all/0/1">Vladimir Rybakov</a> at July 29, 2014 01:30 AM

Real-Time Bidding Benchmarking with iPinYou Dataset. (arXiv:1407.7073v1 [cs.GT])

Being an emerging paradigm for display advertising, Real-Time Bidding (RTB) drives the focus of the bidding strategy from context to users' interest by computing a bid for each impression in real time. The data mining work and particularly the bidding strategy development becomes crucial in this performance-driven business. However, researchers in computational advertising area have been suffering from lack of publicly available benchmark datasets, which are essential to compare different algorithms and systems. Fortunately, a leading Chinese advertising technology company iPinYou decided to release the dataset used in its global RTB algorithm competition in 2013. The dataset includes logs of ad auctions, bids, impressions, clicks, and final conversions. These logs reflect the market environment as well as form a complete path of users' responses from advertisers' perspective. This dataset directly supports the experiments of some important research problems such as bid optimisation and CTR estimation. To the best of our knowledge, this is the first publicly available dataset on RTB display advertising. Thus, they are valuable for reproducible research and understanding the whole RTB ecosystem. In this paper, we first provide the detailed statistical analysis of this dataset. Then we introduce the research problem of bid optimisation in RTB and the simple yet comprehensive evaluation protocol. Besides, a series of benchmark experiments are also conducted, including both click-through rate (CTR) estimation and bid optimisation.

by <a href="http://arxiv.org/find/cs/1/au:+Zhang_W/0/1/0/all/0/1">Weinan Zhang</a>, <a href="http://arxiv.org/find/cs/1/au:+Yuan_S/0/1/0/all/0/1">Shuai Yuan</a>, <a href="http://arxiv.org/find/cs/1/au:+Wang_J/0/1/0/all/0/1">Jun Wang</a> at July 29, 2014 01:30 AM

Fast Spammer Detection Using Structural Rank. (arXiv:1407.7072v1 [cs.IR])

Comments for a product or a news article are rapidly growing and have become a medium for measuring the quality of products or services. Consequently, spammers have emerged in this area to bias them in their favor. In this paper, we propose an efficient spammer detection method using the structural rank of author-specific term-document matrices. The use of structural rank was found effective and far faster than similar methods.

by <a href="http://arxiv.org/find/cs/1/au:+Kim_S/0/1/0/all/0/1">Seungyeon Kim</a>, <a href="http://arxiv.org/find/cs/1/au:+Park_H/0/1/0/all/0/1">Haesun Park</a>, <a href="http://arxiv.org/find/cs/1/au:+Lebanon_G/0/1/0/all/0/1">Guy Lebanon</a> at July 29, 2014 01:30 AM

Bar Recursion and Products of Selection Functions. (arXiv:1407.7046v1 [cs.LO])

We show how two iterated products of selection functions can both be used in conjunction with system T to interpret, via the dialectica interpretation and modified realizability, full classical analysis. We also show that one iterated product is equivalent over system T to Spector's bar recursion, whereas the other is T-equivalent to modified bar recursion. Modified bar recursion itself is shown to arise directly from the iteration of a different binary product of "skewed" selection functions. Iterations of the dependent binary products are also considered but in all cases are shown to be T-equivalent to the iteration of the simple products.

by <a href="http://arxiv.org/find/cs/1/au:+Escardo_M/0/1/0/all/0/1">Martin Escardo</a>, <a href="http://arxiv.org/find/cs/1/au:+Oliva_P/0/1/0/all/0/1">Paulo Oliva</a> at July 29, 2014 01:30 AM

Point degree spectra of represented spaces. (arXiv:1405.6866v2 [math.GN] UPDATED)

We introduce the point spectrum of a represented space as a substructure of the Medvedev degrees. The point spectrum is closely linked to the isomorphism type of a space w.r.t. countably continuous maps, and via this, also to the dimension. Through this new connection between descriptive set theory and degree theory (as part of computability theory) we can answer several open questions.

As a result on the way, we prove that any admissible represented space with an effectively fiber-compact representation is already computably metrizable.

by <a href="http://arxiv.org/find/math/1/au:+Kihara_T/0/1/0/all/0/1">Takayuki Kihara</a>, <a href="http://arxiv.org/find/math/1/au:+Pauly_A/0/1/0/all/0/1">Arno Pauly</a> at July 29, 2014 01:30 AM

Mobile Computing in Digital Ecosystems: Design Issues and Challenges. (arXiv:1105.2458v3 [cs.NI] UPDATED)

In this paper we argue that the set of wireless, mobile devices (e.g., portable telephones, tablet PCs, GPS navigators, media players) commonly used by human users enables the construction of what we term a digital ecosystem, i.e., an ecosystem constructed out of so-called digital organisms (see below), that can foster the development of novel distributed services. In this context, a human user equipped with his/her own mobile devices can be thought of as a digital organism (DO), a subsystem characterized by a set of peculiar features and resources it can offer to the rest of the ecosystem for use by its peer DOs. The internal organization of the DO must address issues of management of its own resources, including power consumption. Inside the DO and among DOs, peer-to-peer interaction mechanisms can be conveniently deployed to favor resource sharing and data dissemination. Throughout this paper, we show that most of the solutions and technologies needed to construct a digital ecosystem are already available. What is still missing is a framework (i.e., mechanisms, protocols, services) that can support effectively the integration and cooperation of these technologies. In addition, in the following we show that that framework can be implemented as a middleware subsystem that enables novel and ubiquitous forms of computation and communication. Finally, in order to illustrate the effectiveness of our approach, we introduce some experimental results we have obtained from preliminary implementations of (parts of) that subsystem.

by <a href="http://arxiv.org/find/cs/1/au:+DAngelo_G/0/1/0/all/0/1">Gabriele D&#x27;Angelo</a>, <a href="http://arxiv.org/find/cs/1/au:+Ferretti_S/0/1/0/all/0/1">Stefano Ferretti</a>, <a href="http://arxiv.org/find/cs/1/au:+Ghini_V/0/1/0/all/0/1">Vittorio Ghini</a>, <a href="http://arxiv.org/find/cs/1/au:+Panzieri_F/0/1/0/all/0/1">Fabio Panzieri</a> at July 29, 2014 01:30 AM

LUNES: Agent-based Simulation of P2P Systems (Extended Version). (arXiv:1105.2447v3 [cs.DC] UPDATED)

We present LUNES, an agent-based Large Unstructured NEtwork Simulator, which allows one to simulate complex networks composed of a high number of nodes. LUNES is modular, since it splits the three phases of network topology creation, protocol simulation and performance evaluation. This permits easy integration of external software tools into the main software architecture. The simulation of the interaction protocols among network nodes is performed via a simulation middleware that supports both the sequential and the parallel/distributed simulation approaches. In the latter case, a specific mechanism for communication overhead reduction is used; this guarantees high levels of performance and scalability. To demonstrate the efficiency of LUNES, we test the simulator with gossip protocols executed on top of networks (representing peer-to-peer overlays), generated with different topologies. Results demonstrate the effectiveness of the proposed approach.

by <a href="http://arxiv.org/find/cs/1/au:+DAngelo_G/0/1/0/all/0/1">Gabriele D&#x27;Angelo</a>, <a href="http://arxiv.org/find/cs/1/au:+Ferretti_S/0/1/0/all/0/1">Stefano Ferretti</a> at July 29, 2014 01:30 AM

Adaptive Event Dissemination for Peer-to-Peer Multiplayer Online Games. (arXiv:1102.0720v3 [cs.NI] UPDATED)

In this paper we show that gossip algorithms may be effectively used to disseminate game events in Peer-to-Peer (P2P) Multiplayer Online Games (MOGs). Game events are disseminated through an overlay network. The proposed scheme exploits the typical behavior of players to tune the data dissemination. In fact, it is well known that users playing a MOG typically generate game events at a rate that can be approximated using some (game dependent) probability distribution. Hence, as soon as a given node experiences a reception rate, for messages coming from a given peer, which is lower than expected, it can send a stimulus to the neighbor that usually forwards these messages, asking it to increase its dissemination probability. Three variants of this approach will be studied. According to the first one, upon reception of a stimulus from a neighbor, a peer increases its dissemination probability towards that node irrespectively from the sender. In the second protocol a peer increases only the dissemination probability for a given sender towards all its neighbors. Finally, the third protocol takes into consideration both the sender and the neighbor in order to decide how to increase the dissemination probability. We performed extensive simulations to assess the efficacy of the proposed scheme, and based on the simulation results we compare the different dissemination protocols. The results confirm that adaptive gossip schemes are indeed effective and deserve further investigation.

by <a href="http://arxiv.org/find/cs/1/au:+DAngelo_G/0/1/0/all/0/1">Gabriele D&#x27;Angelo</a>, <a href="http://arxiv.org/find/cs/1/au:+Ferretti_S/0/1/0/all/0/1">Stefano Ferretti</a>, <a href="http://arxiv.org/find/cs/1/au:+Marzolla_M/0/1/0/all/0/1">Moreno Marzolla</a> at July 29, 2014 01:30 AM

StackOverflow

Core Foundations, where is CF_PRIVATE defintion

I'm trying to compile Core Foundations Lite Build 855.14 on FreeBSD 10. The compiler can't find the type CF_PRIVATE.

example...

    ./CFInternal.h:124:1: error: unknown type name 'CF_PRIVATE'
    CF_PRIVATE CFIndex __CFActiveProcessorCount();

    ./CFInternal.h:176:1: error: unknown type name 'CF_PRIVATE'
    CF_PRIVATE void _CFLogSimple(int32_t lev, char *format, ...);

    ./CFInternal.h:176:12: error: expected identifier or '('
    CF_PRIVATE void _CFLogSimple(int32_t lev, char *format, ...);

Does anyone know where it is defined? I've searched most of the files included in the CF_855.14 folder (i.e. CFInternal.h, CFBase.h, etc.), and I've looked on Google, but I'm not finding much information on it. I looked at opencflite-476.19.0 and its CFInternal.h has no CF_PRIVATE type, while a search in CF_855.14's version of CFInternal.h finds CF_PRIVATE 31 times. Any help would be appreciated.

by 2trill2spill at July 29, 2014 01:28 AM

Future of Try both recover

What is the best way to recover a Future[Try[A]] to a value of type A?

val future: Future[String] = ???
val tr: Try[String] = ???

future recover {case _ => "recovered"}
tr recover {case _ => "recovered"}

val futureTry: Future[Try[String]] = ???

How to recover both with "recovered" ?
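One possible approach (a sketch of mine, not a confirmed answer; it assumes the inner Try failure should also fall back to "recovered" and that an implicit ExecutionContext is in scope): recover the inner Try by mapping over the Future, then recover the Future itself.

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Success, Try}

val futureTry: Future[Try[String]] = ???

// Recover the inner Try first, then the outer Future.
val recoveredBoth: Future[Try[String]] =
  futureTry
    .map(_.recover { case _ => "recovered" })    // handles a Failure inside the Try
    .recover { case _ => Success("recovered") }  // handles a failed Future

// If a plain Future[String] is wanted, the Try can then be unwrapped safely:
val flat: Future[String] = recoveredBoth.map(_.get)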

by Artem at July 29, 2014 01:06 AM

StackOverflow

Scala Map values getting printed as float and integer randomly [duplicate]

I am printing the value of a Scala Map in a Java class like below:

Option<Object> data = JSON.parseFull((String) zkClient.readData(statePath));
scala.collection.immutable.Map<String, Object> leaderInfo =
        (scala.collection.immutable.Map<String, Object>) data.get();
System.out.println(leaderInfo.toString());

But I am surprised that sometimes the values come out as floating point numbers

controller_epoch -> 6.0, leader -> 1.0, version -> 1.0, leader_epoch -> 4.0, isr -> List(1.0)

and sometimes as Integer.

controller_epoch -> 6, leader -> 1, version -> 1, leader_epoch -> 4, isr -> List(1)

I want the data to always be parsed as an Integer. Is there any way to do that?
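One workaround sketch (an assumption of mine, not a confirmed answer): the default Scala JSON parser produces Doubles for all numbers, so the parsed values can be normalised before use. The helper below (normalise) is made up for illustration and assumes the values only ever hold whole numbers.

// Convert whole Doubles back to Ints, recursing into lists.
def normalise(v: Any): Any = v match {
  case d: Double if d.isWhole => d.toInt
  case xs: List[_]            => xs.map(normalise)
  case other                  => other
}

val leaderInfo: Map[String, Any] = Map("leader" -> 1.0, "isr" -> List(1.0))
leaderInfo.map { case (k, v) => k -> normalise(v) }
// Map(leader -> 1, isr -> List(1))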

by rrs120486 at July 29, 2014 12:58 AM

What is the functionality of the <% operator?

Recently, I looked at an example of implicit chaining, implicit def foo[C3 <% C](c: C). I think I am confused about the difference between <% and (implicit c : C).

If I write implicit def bToC[C3 <: C](c: C)(implicit c3 : C3), it gives a compilation error. Why is that? The implicit def should be in scope.
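For context, a view bound C3 <% C is shorthand for an extra implicit evidence parameter of type C3 => C, whereas (implicit c: C) asks for a value of type C itself. A sketch with made-up classes (not the A/B/C from the question):

import scala.language.implicitConversions

class Base(val n: Int)
class Other(val m: Int)
implicit def otherToBase(o: Other): Base = new Base(o.m)

def viewBound[T <% Base](x: T): Int = x.n                       // view-bound form
def desugared[T](x: T)(implicit ev: T => Base): Int = ev(x).n   // what the compiler generates

viewBound(new Other(1))   // 1, via otherToBase
desugared(new Other(2))   // 2, same mechanism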

Edit:

Can someone explain why

implicit def aToB[A1 : A](a: A1)(implicit ev: Int => A1): B = new B(a.n, a.n)

and

implicit def aToB[A1 <: A](a: A1)(implicit ev: Int => A1): B = new B(a.n, a.n)

are not working?

Many thanks in advance

by Cloud tech at July 29, 2014 12:44 AM

Planet Theory

A 3-factor approximation algorithm for a Minimum Acyclic Agreement Forest on k rooted, binary phylogenetic trees

Authors: Asish Mukhopadhyay, Puspal Bhabak
Download: PDF
Abstract: Phylogenetic trees are leaf-labelled trees, where the leaves correspond to extant species (taxa), and the internal vertices represent ancestral species. The evolutionary history of a set of species can be explained by more than one phylogenetic tree, giving rise to the problem of comparing phylogenetic trees for similarity. Various distance metrics, like the subtree prune-and-regraft (SPR), tree bisection reconnection (TBR) and nearest neighbour interchange (NNI) have been proposed to capture this similarity. The distance between two phylogenetic trees can also be measured by the size of a Maximum Agreement Forest (MAF) on these trees, as it has been shown that the rooted subtree prune-and-regraft distance is 1 less than the size of a MAF. Since computing a MAF of minimum size is an NP-hard problem, approximation algorithms are of interest. Recently, it has been shown that the MAF on k(>=2) trees can be approximated to within a factor of 8. In this paper, we improve this ratio to 3. For certain species, however, the evolutionary history is not completely tree-like. Due to reticulate evolution two gene trees, though related, appear different, making a phylogenetic network a more appropriate representation of reticulate evolution. A phylogenetic network contains hybrid nodes for the species evolved from two parents. The number of such nodes is its hybridization number. It has been shown that this number is 1 less than the size of a Maximum Acyclic Agreement Forest (MAAF). We show that the MAAF for k(>= 2) phylogenetic trees can be approximated to within a factor of 3.

July 29, 2014 12:41 AM

Faster and Simpler Sketches of Valuation Functions

Authors: Keren Cohavi, Shahar Dobzinski
Download: PDF
Abstract: We present fast algorithms for sketching valuation functions. Let $N$ ($|N|=n$) be some ground set and $v:2^N\rightarrow \mathbb R$ be a function. We say that $\tilde v:2^N\rightarrow \mathbb R$ is an $\alpha$-sketch of $v$ if for every set $S$ we have that $\frac {v(S)} {\alpha} \leq \tilde v(S) \leq v(S)$ and $\tilde v$ can be described in $poly(n)$ bits.

Goemans et al. [SODA'09] showed that if $v$ is submodular then there exists an $\tilde O(\sqrt n)$-sketch that can be constructed using polynomially many value queries (this is the best possible, as Balcan and Harvey [STOC'11] show that no submodular function admit an $n^{\frac 1 3 - \epsilon}$-sketch). Based on their work, Balcan et al. [COLT'12] and Badanidiyuru et al. [SODA'12] show that if $v$ is subadditive then there exists an $\tilde O(\sqrt n)$-sketch that can be constructed using polynomially many demand queries. All previous sketches are based on complicated geometric constructions. The first step in their constructions is proving the existence of a good sketch by finding an ellipsoid that ``approximates'' $v$ well (this is done by applying John's theorem to ensure the existence of an ellipsoid that is ``close'' to the polymatroid that is associated with $v$). The second step is showing this ellipsoid can be found efficiently, and this is done by repeatedly solving a certain convex program to obtain better approximations of John's ellipsoid.

In this paper, we give a much simpler, non-geometric proof for the existence of good sketches, and utilize the proof to obtain much faster algorithms that match the previously obtained approximation bounds. Specifically, we provide an algorithm that finds $\tilde O(\sqrt n)$-sketch of a submodular function with only $\tilde O(n^\frac{3}{2})$ value queries, and an algorithm that finds $\tilde O(\sqrt n)$-sketch of a subadditive function with $O(n)$ demand and value queries.

July 29, 2014 12:41 AM

On Polynomial Kernelization of $\mathcal{H}$-free Edge Deletion

Authors: N. R. Aravind, R. B. Sandeep, Naveen Sivadasan
Download: PDF
Abstract: For a set of graphs $\mathcal{H}$, the \textsc{$\mathcal{H}$-free Edge Deletion} problem asks to find whether there exist at most $k$ edges in the input graph whose deletion results in a graph without any induced copy of $H\in\mathcal{H}$. In \cite{cai1996fixed}, it is shown that the problem is fixed-parameter tractable if $\mathcal{H}$ is of finite cardinality. However, it is proved in \cite{cai2013incompressibility} that if $\mathcal{H}$ is a singleton set containing $H$, for a large class of $H$, there exists no polynomial kernel unless $coNP\subseteq NP/poly$. In this paper, we present a polynomial kernel for this problem for any fixed finite set $\mathcal{H}$ of connected graphs and when the input graphs are of bounded degree. We note that there are \textsc{$\mathcal{H}$-free Edge Deletion} problems which remain NP-complete even for the bounded degree input graphs, for example \textsc{Triangle-free Edge Deletion}\cite{brugmann2009generating} and \textsc{Custer Edge Deletion($P_3$-free Edge Deletion)}\cite{komusiewicz2011alternative}. When $\mathcal{H}$ contains $K_{1,s}$, we obtain a stronger result - a polynomial kernel for $K_t$-free input graphs (for any fixed $t> 2$). We note that for $s>9$, there is an incompressibility result for \textsc{$K_{1,s}$-free Edge Deletion} for general graphs \cite{cai2012polynomial}. Our result provides first polynomial kernels for \textsc{Claw-free Edge Deletion} and \textsc{Line Edge Deletion} for $K_t$-free input graphs which are NP-complete even for $K_4$-free graphs\cite{yannakakis1981edge} and were raised as open problems in \cite{cai2013incompressibility,open2013worker}.

July 29, 2014 12:41 AM

A Parallel Branch and Bound Algorithm for the Maximum Labelled Clique Problem

Authors: Ciaran McCreesh, Patrick Prosser
Download: PDF
Abstract: The maximum labelled clique problem is a variant of the maximum clique problem where edges in the graph are given labels, and we are not allowed to use more than a certain number of distinct labels in a solution. We introduce a new branch-and-bound algorithm for the problem, and explain how it may be parallelised. We evaluate an implementation on a set of benchmark instances, and show that it is consistently faster than previously published results, sometimes by four or five orders of magnitude.

July 29, 2014 12:41 AM

Tight lower bound for the channel assignment problem

Authors: Arkadiusz Socala
Download: PDF
Abstract: We study the complexity of the Channel Assignment problem. A major open problem asks whether Channel Assignment admits an $O(c^n)$-time algorithm, for a constant $c$ independent of the weights on the edges. We answer this question in the negative i.e. we show that there is no $2^{o(n\log n)}$-time algorithm solving Channel Assignment unless the Exponential Time Hypothesis fails. Note that the currently best known algorithm works in time $O^*(n!) = 2^{O(n\log n)}$ so our lower bound is tight.

July 29, 2014 12:41 AM

Assigning channels via the meet-in-the-middle approach

Authors: Łukasz Kowalik, Arkadiusz Socała
Download: PDF
Abstract: We study the complexity of the Channel Assignment problem. By applying the meet-in-the-middle approach we get an algorithm for the $\ell$-bounded Channel Assignment (when the edge weights are bounded by $\ell$) running in time $O^*((2\sqrt{\ell+1})^n)$. This is the first algorithm which breaks the $(O(\ell))^n$ barrier. We extend this algorithm to the counting variant, at the cost of slightly higher polynomial factor.

A major open problem asks whether Channel Assignment admits a $O(c^n)$-time algorithm, for a constant $c$ independent of $\ell$. We consider a similar question for Generalized T-Coloring, a CSP problem that generalizes Channel Assignment. We show that Generalized T-Coloring does not admit a $2^{2^{o\left(\sqrt{n}\right)}} {\rm poly}(r)$-time algorithm, where $r$ is the size of the instance.

July 29, 2014 12:41 AM

A note on multipivot Quicksort

Authors: Vasileios Iliopoulos
Download: PDF
Abstract: We analyse a generalisation of the Quicksort algorithm, where k uniformly at random chosen pivots are used for partitioning an array of n distinct keys. Specifically, the expected cost of this scheme is obtained, under the assumption of linearity of the cost needed for the partition process. The integration constants of the expected cost are computed using Vandermonde matrices.

July 29, 2014 12:41 AM

Directed Multicut with linearly ordered terminals

Authors: Robert F. Erbacher, Trent Jaeger, Nirupama Talele, Jason Teutsch
Download: PDF
Abstract: Motivated by an application in network security, we investigate the following "linear" case of Directed Multicut. Let $G$ be a directed graph which includes some distinguished vertices $t_1, \ldots, t_k$. What is the size of the smallest edge cut which eliminates all paths from $t_i$ to $t_j$ for all $i < j$? We show that this problem is fixed-parameter tractable when parametrized in the cutset size $p$ via an algorithm running in $O(4^p p n^4)$ time.

July 29, 2014 12:41 AM

Online Learning and Profit Maximization from Revealed Preferences

Authors: Kareem Amin, Rachel Cummings, Lili Dworkin, Michael Kearns, Aaron Roth
Download: PDF
Abstract: We consider the problem of learning from revealed preferences in an online setting. In our framework, each period a consumer buys an optimal bundle of goods from a merchant according to her (linear) utility function and current prices, subject to a budget constraint. The merchant observes only the purchased goods, and seeks to adapt prices to optimize his profits. We give an efficient algorithm for the merchant's problem that consists of a learning phase in which the consumer's utility function is (perhaps partially) inferred, followed by a price optimization step. We also consider an alternative online learning algorithm for the setting where prices are set exogenously, but the merchant would still like to predict the bundle that will be bought by the consumer for purposes of inventory or supply chain management. In contrast with most prior work on the revealed preferences problem, we demonstrate that by making stronger assumptions on the form of utility functions, efficient algorithms for both learning and profit maximization are possible, even in adaptive, online settings.

July 29, 2014 12:41 AM

From Quantum Query Complexity to State Complexity

Authors: Shenggen Zheng, Daowen Qiu
Download: PDF
Abstract: State complexity of quantum finite automata is one of the interesting topics in studying the power of quantum finite automata. It is therefore of importance to develop general methods for showing state succinctness results for quantum finite automata. One such method is presented and demonstrated in this paper. In particular, we show that state succinctness results can be derived from query complexity results.

July 29, 2014 12:40 AM

StackOverflow

How can i check the file existence in ansible

I am downloading a file with wget from Ansible.

  - name: Download Solr
    shell: chdir={{project_root}}/solr wget http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip

but I only want to do that if the zip file does not exist in that location. Currently the system downloads it every time.

by user1994660 at July 29, 2014 12:40 AM

Planet Clojure

The Proper Pronunciation of Clojure's Assoc

Sometimes I pause before talking to someone about Clojure code. Not because I am unsure of the code, but because I am unsure of how to pronounce the code. The particular code in question is Clojure’s assoc. I have heard it pronounced two ways. One is “assosh”, the other is “assok”. So, to determine it, I decided to conduct a scientific poll of the Clojure community.

I posted the poll on Twitter to the Clojure community who follow me. The control group poll was not viewed by those who do not follow me and/or are not on Twitter.

The results were startling.

https://c1.staticflickr.com/3/2928/14585605540_6d0ce6169f_n.jpg

  • assosh – 10
  • assok – 8
  • assose – 2
  • Jeremy – 1
  • asoaksh – 1

The community is clearly deeply divided on this important question.

After sifting through the raw data, I remembered my statistical analysis and threw out the extremes.

https://c2.staticflickr.com/6/5559/14792182503_12aa682260_n.jpg

The conclusion was still a stark reality.

We do not as a community know how to pronounce assoc.

Solution

I can only see one way forward. We must address this as a community. I propose that the community documentation of Clojure Docs and Grimoire modify their sites to include audio pronunciation like this.

Remember, I’m pulling for you. We are all in this together.

by Carin Meier at July 29, 2014 12:27 AM

StackOverflow

ansible-playbook extra-vars option passing '=' inside of string

Based on the documentation for ansible-playbook, is there a way to pass the character = inside one of the --extra-vars options? For example, with ansible-playbook my.yml -i hosts --extra-vars "params='aminjam name=amin'" it looks like the = character doesn't get passed down to the playbook. Is there a trick to do that?

by aminjam at July 29, 2014 12:14 AM

StackOverflow

Are Iteratees safe for managing resources?

Suppose I were reading from an InputStream.

How I would normally do it:

val inputStream = ...
try {
    doStuff(inputStream)
} finally {
    inputStream.close()
}

Whether or not doStuff throws an exception, we will close the InputStream.


How I would do it with iteratees:

val inputStream ...
Enumerator.fromStream(inputStream)(Iteratee.foreach(doStuff))

Will the InputStream be closed (even if doStuff throws an exception)?

A little test:

val inputStream = new InputStream() { // returns 10, 9, ... 0, -1
    private var i = 10
    def read() = {
       i = math.max(0, i) - 1
       i
    }
    override def close() = println("closed") // looking for this
}
Enumerator.fromStream(inputStream)(Iteratee.foreach(a => 1 / 0)).onComplete(println)

We only see:

Failure(java.lang.ArithmeticException: / by zero)

The stream was never closed. Replace 1 / 0 with 1 / 1 and you'll see that the stream closes.

Of course, I could maintain a reference to the original stream and close it in case of failure, but AFAIK the idea of using iteratees is creating composable iteration without having to do that.
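A minimal fallback sketch (not iteratee-idiomatic, and assuming Play 2.2's iteratee API): keep the reference and close the stream when the returned Future completes, whether it succeeded or failed.

import java.io.ByteArrayInputStream
import play.api.libs.iteratee.{Enumerator, Iteratee}
import scala.concurrent.ExecutionContext.Implicits.global

val inputStream = new ByteArrayInputStream("some bytes".getBytes("UTF-8"))

// Run the iteratee over the stream, then close in all outcomes.
val ran = Enumerator.fromStream(inputStream) |>>> Iteratee.foreach[Array[Byte]](chunk => println(chunk.length))
ran.andThen { case _ => inputStream.close() }   // runs on success and on failure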


  1. Is this expected behavior?

  2. Is there a way to use iteratees so the resources are always disposed correctly?

by Paul Draper at July 29, 2014 12:01 AM

Planet Theory

Vertex 2-coloring without monochromatic cycles

Authors: Michał Karpiński
Download: PDF
Abstract: In this paper we study a problem of vertex two-coloring of an undirected graph such that there is no monochromatic cycle of a given length. We show that this problem is hard to solve. We give a proof by presenting a reduction from a variation of the satisfiability (SAT) problem. We show nice properties of coloring cliques with two colors which play a pivotal role in the reduction construction.

July 29, 2014 12:00 AM

DMVP: Foremost Waypoint Coverage of Time-Varying Graphs

Authors: Eric Aaron, Danny Krizanc, Elliot Meyerson
Download: PDF
Abstract: We consider the Dynamic Map Visitation Problem (DMVP), in which a team of agents must visit a collection of critical locations as quickly as possible, in an environment that may change rapidly and unpredictably during the agents' navigation. We apply recent formulations of time-varying graphs (TVGs) to DMVP, shedding new light on the computational hierarchy $\mathcal{R} \supset \mathcal{B} \supset \mathcal{P}$ of TVG classes by analyzing them in the context of graph navigation. We provide hardness results for all three classes, and for several restricted topologies, we show a separation between the classes by showing severe inapproximability in $\mathcal{R}$, limited approximability in $\mathcal{B}$, and tractability in $\mathcal{P}$. We also give topologies in which DMVP in $\mathcal{R}$ is fixed parameter tractable, which may serve as a first step toward fully characterizing the features that make DMVP difficult.

July 29, 2014 12:00 AM

PTAS for Minimax Approval Voting

Authors: Jaroslaw Byrka, Krzysztof Sornat
Download: PDF
Abstract: We consider Approval Voting systems where each voter decides on a subset of candidates he/she approves. We focus on the optimization problem of finding the committee of fixed size k minimizing the maximal Hamming distance from a vote. In this paper we give a PTAS for this problem and hence resolve the open question raised by Carragianis et al. [AAAI'10]. The result is obtained by adapting the techniques developed by Li et al. [JACM'02] originally used for the less constrained Closest String problem. The technique relies on extracting information and structural properties of constant size subsets of votes.

July 29, 2014 12:00 AM

July 28, 2014

QuantOverflow

When shorting a stock, do you pay current market price or the best (lowest) available ask price? [on hold]

A stock's last sale price is 2.5 dollars. Say I am shorting 1000 shares of the stock; there's an ask price of 3 dollars for 500 shares, an ask of 3.5 dollars for 250 shares, and an ask price of 4 dollars for the final 250.

Do the stocks get borrowed at a price of 2.5 dollars each? Does the first ask price get used for all 1000 shares? Or does each part get filled separately, in the same way going long works?

If someone could explain the pricing mechanism for short sales I would appreciate it.

Also, how often do you find that you are unable to make a short sale?

by Ryan Sinclair at July 28, 2014 11:34 PM

Lambda the Ultimate Forum

No Instruction Set Computer NISC

How might your language design change in light of this sort of vertical integration?

Abstract: General-purpose processors are often unable to exploit the parallelism inherent to the software code. This is why additional hardware accelerators are needed to enable meeting the performance goals. NISC (No-Instruction-Set Computer) is a new approach to hardware-software co-design based on automatic generation of special-purpose processors. It was designed to be self-sufficient and it eliminates the need for other processors in the system. This work describes a method for expanding the application domain of the NISC processor to general-purpose processor systems with large amounts of processor-specific legacy code. This coprocessor-based approach allows application acceleration by utilizing both instruction-level and task-level parallelism by migrating performance-critical parts of an application to hardware without the need for changing the rest of the program code. For demonstration of this concept, a NISC coprocessor WISHBONE interface was designed. It was implemented and tested in a WISHBONE system based on Altium’s TSK3000A general-purpose RISC soft processor and an analytical model was proposed to provide the means to evaluate its efficiency in arbitrary systems.

July 28, 2014 11:03 PM

StackOverflow

Difference between some & (first (filter in clojure?

I have seen it suggested that some can be used instead of (first (filter ...)), but I'm confused by a discrepancy in how they work. Can anyone please explain why this doesn't yield the same result?

(some (comp #{:fu} :id) [{:id :fu :baz :bar}])
> :fu

(first (filter (comp #{:fu} :id) [{:id :fu :baz :bar}]))
> {:id :fu, :baz :bar}

Is there any other idiomatic and less verbose way to do (first (filter ...)), i.e. get the first item satisfying a predicate?

by 4ZM at July 28, 2014 10:50 PM

Merging maps in scala by key and sum or append the value

I'm looking for a way to merge maps. I can write code every time I need to merge maps by their keys, but the problem is that the manipulation on the values is different every time (for strings, lists of ints, etc.).

Is there a library for this?

for example my input is :

  //value is int - need to sum the values
  val example1 = Map("a" -> 1 , "b" -> 1 , "c" -> 7)
  val example2 = Map("a" -> 1 , "e" -> 5 , "f" -> 2)

  //value is list - need to append 
  val example1 = Map("a" -> List(1) , "b" -> List(3) , "c" -> List(2))
  val example2 = Map("a" -> List(4) , "e" -> List(1) , "f" -> List(1))

  //value is string - need to append
  val example1 = Map("a" -> "asd" , "b" -> "efd" , "c" ->  "sdf")
  val example2 = Map("a" -> "ads" , "e" -> "sdfds" , "f" -> "czxc2")
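If a library is not required, one plain-Scala possibility is a small generic merge helper (the name mergeWith below is made up for illustration); a semigroup-based library such as Scalaz can do the same thing, but this keeps it dependency-free.

// Merge two maps by key; values present in both maps are combined by `combine`.
def mergeWith[K, V](a: Map[K, V], b: Map[K, V])(combine: (V, V) => V): Map[K, V] =
  b.foldLeft(a) { case (acc, (k, v)) =>
    acc.updated(k, acc.get(k).map(combine(_, v)).getOrElse(v))
  }

mergeWith(Map("a" -> 1, "b" -> 1, "c" -> 7), Map("a" -> 1, "e" -> 5, "f" -> 2))(_ + _)
// Map(a -> 2, b -> 1, c -> 7, e -> 5, f -> 2)
mergeWith(Map("a" -> List(1)), Map("a" -> List(4)))(_ ++ _)
// Map(a -> List(1, 4))
mergeWith(Map("a" -> "asd"), Map("a" -> "ads"))(_ + _)
// Map(a -> asdads)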

by The Best at July 28, 2014 10:31 PM

StackOverflow

How to run leiningen tests having > (greater than) symbol in their name under Windows?

I can't refer to tests having -> in their names because the > symbol is treated as stdout redirection.

Under PowerShell 4.0 I run current stable Leiningen 2.4.2 with parameters being escaped by --%:

PS> lein test :only --% my-project.core/foo->bar

I see only stderr in the console, but as a side effect I get a bar file created with the following content:

lein test user

Ran 0 tests containing 0 assertions.
0 failures, 0 errors.

by Alexey at July 28, 2014 10:21 PM

Idea/eclipse shortcuts for Scala. Jump from variable to class which this variable implements? From function to class which it's result implements?

I just started learning Scala after Java. I like to use the [ctrl+B] / [ctrl+click] shortcuts in IDEA (and Eclipse) to quickly navigate and see which class the variable in front of me belongs to. But in Scala (where declaring the type of a variable or a function's return type is unnecessary in most cases) this navigation becomes slower and harder.

Is there some shortcut in IntelliJ IDEA to jump from a variable to the class which this variable implements? Or from a function to the class its result implements?

I think the same question is relevant for Eclipse users too.

by vadim_shb at July 28, 2014 10:17 PM

StackOverflow

scala java scala/App error NoClassDefFoundError

I am new to both Java and Scala. I have created the jar file and want it to be executed on the JVM. I am not sure what the problem is here. Kindly help me. Please find the error log and the command called below.

My-mini:~ DC$ export JVM_ARGS="-XX:+CMSClassUnloadingEnabled -XX:PermSize=512M -  XX:MaxPermSize=1024M"
My-mini:~ DC$ java $JVM_ARGS -jar /Users/DC/Desktop/eclipse/Workspace/FTDataProject/target/scala-2.11/ftdataproject_2.11-1.0.jar MainApp.scala Germany 20140728
Exception in thread "main" java.lang.NoClassDefFoundError: scala/App
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClassCond(ClassLoader.java:637)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
    at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    at com.DC.FTDataParser.MainApp.main(MainApp.scala)
Caused by: java.lang.ClassNotFoundException: scala.App
    at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    ... 13 more

by user3341078 at July 28, 2014 10:12 PM

Lobsters

What would be the optimal size for Lobsters?

When communities get too big, then they fall apart. Today, I was just thinking to myself: “What would happen if Lobsters had a user limit”? That got me poking around the Internet.

Lobsters has 3293 registered users today: http://lobste.rs/u

Here’s an interesting series of articles about online community size: http://www.lifewithalacrity.com/2009/03/power-laws.html

What do you all think about community size? Obviously, too small is bad. But what size is too big?

by jm at July 28, 2014 10:08 PM

StackOverflow

Install newest version of thrift via apt-get or another way using ansible?

I'm interested in installing thrift 0.9.2 on my ubuntu system so that I'll have access to the fullcamel. It seems like the repository I'm using has only version 0.9.0. I know I could build from sources, but I need an easier-to-automate way to do this, because I need to be able to install this version as an ansible task. Currently, I have the following ansible tasks:

- name: Install libthrift-java=0.9.0-1ubuntu1
  apt: name=libthrift-java=0.9.0-1ubuntu1 state=present

- name: Install thrift-compiler=0.9.0-3
  apt: name=thrift-compiler=0.9.0-3 state=present

by jonderry at July 28, 2014 09:58 PM

Compilation error including clj-time in project

I have included [clj-time "0.8.0"] in my project.clj. I then refer to clj-time in my namespace like so:

(ns school-finder.tasks
  (:require [clj-time.core :as t]))

However when I try and run the project I get the following compilation error:

Exception in thread "main" java.lang.IllegalArgumentException: No single method: second of interface: clj_time.core.DateTimeProtocol found for function: second of protocol: DateTimeProtocol, compiling:(clj_time/coerce.clj:146:64)

What am I doing wrong?

by David Collie at July 28, 2014 09:56 PM

QuantOverflow

Gamma vs. Volatility Risk

Original Question: What is the link between Gamma and the Volatility Risk?

It leads me to ask:

- What is the definition of Volatility Risk, and what are good practices for measuring it?

Thinking about that question, all I could figure out linking to it is this:

Consider a market $(S^0, S)$ composed of one non-risky asset $S^0$ and one risky asset $S$. The interest rate in this market is $r$ (supposed constant just to simplify). Also, consider an adapted square integrable process $\sigma$ and suppose that $S$ follows $$S_t= S_0 + \int_0^t r S_u ~du +\int_0^t \sigma_u S_u dW_u \quad , t \geq 0$$ under the risk-neutral probability. Let's suppose a trader evaluates the price $v$ of an option of maturity $T$ by fixing $\sigma_t=\bar{\sigma}(t,S_t)$, a function of local volatility. Then, I know that the coverage error is given by (one can show it by a simple application of the Feynman-Kac theorem and Itô's Lemma) \begin{align} \text{Err} = V_T- v(T, S_T) &= \int_0 ^T e^{r(T-t)} (\bar{\sigma}(t,S_t) - \sigma_t) \partial^2_x v(t,S_t)dt \\&=\int_0 ^T e^{r(T-t)} (\bar{\sigma}(t,S_t) - \sigma_t) \Gamma(t,S_t)dt,\end{align} where $V = (V_t)_{0 \leq t \leq T}$ is the replicating portfolio and $v(t,x)$ is the intrinsic value of the option at time $t \in [0,T]$ and spot $S_t=x$.

So, we conclude that if:

  • $\Gamma \geq 0$ (i.e. a convex price): an over-estimation of $\bar{\sigma}(t,S_t)$ secures a gain, and an under-estimation of $\bar{\sigma}(t,S_t)$ secures a loss.
  • $\Gamma\leq 0$ (i.e. a concave price): an under-estimation of $\bar{\sigma}(t,S_t)$ secures a gain, and an over-estimation of $\bar{\sigma}(t,S_t)$ secures a loss.
  • $\Gamma \approx 0$ (i.e. neutral Gamma hedging): the hedging is weakly sensitive to realized volatility.

Am I going in the right direction to answer that?

I would appreciate any advice. Thanks in advance.

by Paul at July 28, 2014 09:26 PM

StackOverflow

Building latest Spark Snapshot in Intellij gives MesosSchedulerBackend compilation errors

I ran

sbt gen-idea

And then I opened the newly created IJ project. Syntax highlighting is working fine - a good sign. But there are a handful of errors occurring:

C:\apps\incubator-spark\core\src\main\scala\org\apache\spark\executor\MesosExecutorBackend.scala
Error:(256, 35) type mismatch;
 found   : org.apache.mesos.protobuf.ByteString
 required: com.google.protobuf.ByteString
      .setData(ByteString.copyFrom(task.serializedTask))
                                  ^
Error:(119, 35) type mismatch;
 found   : org.apache.mesos.protobuf.ByteString
 required: com.google.protobuf.ByteString
      .setData(ByteString.copyFrom(createExecArg()))
                                  ^
C:\apps\incubator-spark\core\src\main\scala\org\apache\spark\scheduler\cluster\mesos\MesosSchedulerBackend.scala
Error:(44, 35) type mismatch;
 found   : org.apache.mesos.protobuf.ByteString
 required: com.google.protobuf.ByteString
      .setData(ByteString.copyFrom(data))
                                  ^

Note that the build IS working on the command line via

sbt compile

Does anyone out there building Spark with IntelliJ have any suggestions?


by javadba at July 28, 2014 08:52 PM

CompsciOverflow

Worst-case recursion depth for guessing numbers in a Sudoku puzzle when certain polynomial-time deduction rules are used

So for an arbitrary $n^2 \times n^2$ Sudoku puzzle, there are rules for inferring values and auxiliary information in cells that run in polynomial time, e.g.:

  • using the filled-in values to check whether only one number is possible for a cell,
  • using the filled-in values to check whether only one cell in a given row/column/square can contain a particular number,
  • eliminating a number as a possibility for a row/column outside of a square if the number must lie in that row/column within the square (and vice versa for eliminating possibilities within a square), and
  • taking advantage of the fact that if there are $k$ cells in a row/column/square that can only contain a particular subset of $k$ numbers, then no other cells in the row/column/square can contain those numbers.

Presumably there are other polynomial time rules too.
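For concreteness, here is a sketch (my own illustration in Scala, for the classical $n = 3$ case) of the first rule above, the kind of polynomial-time deduction meant:

// 9x9 grid, 0 = empty cell.
type Grid = Vector[Vector[Int]]

// Values still possible for cell (r, c), given the filled-in values in its row, column and box.
def candidates(g: Grid, r: Int, c: Int): Set[Int] = {
  val row = g(r).toSet
  val col = g.map(_(c)).toSet
  val (br, bc) = ((r / 3) * 3, (c / 3) * 3)
  val box = (for (i <- br until br + 3; j <- bc until bc + 3) yield g(i)(j)).toSet
  (1 to 9).toSet -- row -- col -- box
}

// Fill every cell that was empty and has exactly one remaining candidate ("naked single").
def nakedSingles(g: Grid): Grid =
  (for (r <- 0 until 9; c <- 0 until 9 if g(r)(c) == 0) yield (r, c))
    .foldLeft(g) { case (acc, (r, c)) =>
      candidates(acc, r, c).toList match {
        case List(only) => acc.updated(r, acc(r).updated(c, only))
        case _          => acc
      }
    }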

The rule/strategy I want to exclude from this list is guessing a fixed number of cells' values, and checking to see if a contradiction is found or a complete valid puzzle is obtained. So, if we consider a certain reasonably rich set of polynomial time rules except for that one, what is the worst-case known recursion depth for how many cells' values need to be guessed simultaneously in order to make progress in a puzzle? Is this known for the classical $n = 3$ for a certain reasonably rich set of polynomial time rules (e.g. the rules I listed above), and more interestingly for general $n$?

by user2566092 at July 28, 2014 08:39 PM

StackOverflow

Scala abstract types in classes within objects

If I do this:

object Parent {
    class Inner extends Testable { type Self <: Inner }
    def inner = new Inner()
}

object Child {
    class Inner extends Parent.Inner { type Self <: Inner }
    def inner = new Inner()
}

trait Testable {
    type Self
    def test[T <: Self] = {}
}

object Main {
    // this works
    val p: Parent.Inner = Child.inner
    // this doesn't
    val h = Parent.inner
    h.test[Child.Inner]
}

I get this error:

error: type arguments [Child.Inner] do not conform to method test's type parameter bounds [T <: Main.h.Self]
    h.test[Child.Inner]

Why does this error when my Self type is Parent.Inner and Child.Inner <: Parent.Inner?


And if I change type Self <: Inner to type Self = Inner and then override type Self = Inner, I get this error:

overriding type Self in class Inner, which equals Parent.Inner;
 type Self has incompatible type
    class Inner extends Parent.Inner { override type Self = Inner }

by mdenton8 at July 28, 2014 08:34 PM

Sbt 0.13 "configuration not public" for depending on test configuration of dependency

There is project called common and the other projectX

Project common has some dependencies in the test scope that I want to see on the test classpath in the projectX which has common as dependency.

therefore in the projectX build.sbt dependency has configuration:

libraryDependencies ++= Seq(
  "org" %% "common" % "0.1" % "compile->compile;test->test"
)

Mapping is done according to sbt documentation

but when test:compile is run it fails with the following error:

[error] (*:update) sbt.ResolveException: unresolved dependency: org#common_2.10;0.1: configuration not public in org#common_2.10;0.1: 'test'. It was required from org#projectX_2.10;0.0.1-SNAPSHOT test

All I want is to have the test-scoped dependencies of project common visible on the test classpath of projectX.
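One workaround sketch (sbt 0.13 keys as I understand them; verify against the sbt docs): the test->test mapping only works if common actually publishes its test configuration, so an alternative is to publish common's test jar and depend on it with the tests classifier.

// In common's build.sbt: also publish the artifacts built from src/test.
publishArtifact in Test := true

// In projectX's build.sbt: depend on the published test jar.
libraryDependencies += "org" %% "common" % "0.1" % "test" classifier "tests"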

by Marek at July 28, 2014 08:23 PM

What can be recovered by Akka Future recover?

I'm curious under which situations the recover function on a Future can recover a thrown exception. I'm using Akka Actor and Future together:

Here is where I made the future call:

implicit val timeout = Timeout(5.seconds) //yes, I already have this line.

val response = (ActorA ? someMessage(someStuff))
                    .mapTo[TransOk]
                    .map(message => (OK, message.get))
                    .recover{
                    case e => (BadRequest, e.getMessage)
                  }

I'm sending a message to ActorA, then mapping the result into the TransOk class, and finally adding .recover{}.

Then this is the ActorA's method:

case someMessage(stuff) =>
      //the exception being thrown here is not captured by Future.recover() method
      //why!?
      val id = if (some.canFind(stuff)) doSomething() 
               else throw new Exception("ERROR ERROR!")

      val result: Try[SomeDBType] = DAL.db.withSession { implicit session =>
        Try(DB.findStuff(stuff))
      }

      result match {
        case Success(content) => sender ! TransOk(content)
        case Failure(ex) => throw ex //let it escalate
      }

The interesting part is: the first exception is not captured by .recover(). So under what circumstances will recover be able to capture an exception? I thought it covered all exceptions happening within the methods being invoked.
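One common explanation and workaround, sketched below with Akka's akka.actor.Status.Failure (treat the details as assumptions to verify): an exception thrown inside the actor does not complete the ask Future, it only restarts the actor, so the Future later fails with an AskTimeoutException and recover never sees the original exception. Replying with a Status.Failure makes the Future fail right away with the exception you choose.

import akka.actor.{Actor, ActorSystem, Props, Status}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

class Failing extends Actor {
  def receive = {
    case "boom" => sender ! Status.Failure(new Exception("ERROR ERROR!"))  // fails the ask Future
    case msg    => sender ! ("handled: " + msg)
  }
}

implicit val timeout = Timeout(5.seconds)
val system = ActorSystem("demo")
val ref = system.actorOf(Props[Failing])

(ref ? "boom")
  .map(r => ("OK", r))
  .recover { case e => ("BadRequest", e.getMessage) }   // now sees "ERROR ERROR!"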

by Wind Dweller at July 28, 2014 08:21 PM

StackOverflow

How to add test-package to available sources in Play?

I have several mock-classes in my test-folder of my Play Java project (see anatomy here).

I have also prefixed all classes in the app- and test-folders with my domain, however that is allowed in Play 2 and works fine. My structure is now like this:

app
 └ my.domain
   └ conf
   └ controllers
   └ models
   └ views
build.sbt
conf
 └ application.conf
 └ routes             → controller-package for routes definitions adapted to my.domain
test
 └ my.domain
   └ mock             → I want to use sources/classes in here in my app!

My problem is: I want to reference classes in test/my.domain.mock in my GlobalSettings in my app-folder. But if I do that, Play says cannot find symbol.

I managed to get rid of errors within IntelliJ IDEA simply by adding the test-folder to the sources of my app in the module settings, but I don't know how to do this so that Play recognizes it too. I guess I would have to alter my build.sbt file, I just have no clue how. I'd appreciate any advice!

EDIT

By request, this is my current GlobalSettings-class:

public class Global extends GlobalSettings {

    private final Injector INJECTOR = createInjector();

    @Override
    public void onStart(Application application) {
        super.onStart(application);
    }

    @Override
    public <A> A getControllerInstance(Class<A> controllerClass) throws Exception {
        return INJECTOR.getInstance(controllerClass);
    }

    private static Injector createInjector() {
        return Guice.createInjector(new AbstractModule() {

            @Override
            protected void configure() {
                bind(UrlGenerator.class).to(ProductiveUrlGenerator.class).in(Singleton.class);
                // this is the problem: since I have another MockGlobal (manually inserted as a fake-application parameter) in my test-folder, I don't really need the mocked classes here
                // but for this service in particular, I want to make sure that it's not even used in dev-mode, only productive, but as long as my MockService is in the test-folder I can't access it here
                bind(ImportantService.class).to(Play.isProd() ? ProductiveService.class : MockService.class).in(Singleton.class);
            }
        });
    }
}

Like I commented in the code, the MockService.class is in test/my.domain.mock and therefore I can't access it in my GlobalSettings in the app-folder. I do have another MockGlobal-class binding all the other mock-classes in my test-folder for test-runs, and of course within the same folder there's access to my mock-classes, so until now that was fine. I then used the mocked global like this in my tests:

fakeApplication(inMemoryDatabase(), mockGlobalInstance)

I just tried moving the whole mock-package in my test-folder to the app-folder, leaving only the tests in the test-folder but not the mock-classes. That seems to work like you said, I just thought it would be cleaner to keep them separate from my productive classes in the app-folder.

PS: I'm using Guice not only for tests, but also for staying flexible regarding which services are used (easy switching of modules).
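
For reference, a minimal sketch of the build.sbt change I was considering (an assumption; note that this puts test code on the production classpath, which may be exactly what should be avoided):

// Hypothetical: make sources under test/ visible to the main (compile) configuration as well.
unmanagedSourceDirectories in Compile += baseDirectory.value / "test"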

by Blacklight at July 28, 2014 08:09 PM

Lobsters

DataTau

Planet Clojure

2048 in ClojureScript by Ádám Peresztegi

Experiences of a ClojureScript rookie while building the 2048 game. The prezi: http://prezi.com/paumq5aq5zkc/2048-in-clojurescript/ The repo with the sources: https://github.com/flash42/adad The leiningen template mentioned: https://github.com/flash42/single-page-cljs-template

by Clojure Budapest at July 28, 2014 07:59 PM

Lobsters

QuantOverflow

Cost of stock issue? [on hold]

I just can't figure this out. Every formula I find assumes the dividend grows by a percent, not by a fixed amount. The problem: AYZ Company is planning on issuing common stock. Bankers have determined that the stock will be offered at 50 dollars per share and that a dividend of 2 dollars will be paid in one year. It is anticipated that there will be consistent growth in dividends of 4 dollars annually. Assuming no flotation cost, what will be the cost of this issue of stock? Thanks in advance!

by Andy H at July 28, 2014 07:56 PM

StackOverflow

Specifying unusual repository layouts in ~/.sbt/repositories file

The sbt documentation covers the basic format of the repositories file. It seems to be mostly specified in the launcher documentation (for sbt.boot.properties) at http://www.scala-sbt.org/release/docs/Launcher-Configuration.html:

[repositories]
  repo-name: URL, ivypattern

Which works fine for most repositories. Unfortunately, I have an unusually laid-out repository that I configure in sbt scala code by calling Resolver.file.ivys(patternToIvies).artifacts(patternToArtifacts), where the two have slightly different locations.

Is there some way to achieve that flexibility in the repositories file? I'd rather not republish all the artifacts in the unusual repository to a more sanely structured one just yet.

by Myserious Dan at July 28, 2014 07:52 PM

Why is this invalid Scala?

I'm working with abstract types, and I'm wondering why this is invalid:

class A {}
class B extends A {}

class X {type T = A}
class Y extends X {override type T = B}

Seeing as B <: A, why can't I assign B to T?

I get this error:

overriding type T in class X, which equals A;
 type T has incompatible type
class Y extends X {override type T = B}

Any help would be appreciated.
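
For what it's worth, declaring T as a bounded abstract type instead of a concrete alias does compile (a sketch, in case narrowing in subclasses is the intent; a concrete alias like type T = A cannot be overridden with a different type):

class A {}
class B extends A {}

class X {type T <: A}                       // abstract member with an upper bound
class Y extends X {override type T = B}     // fine, since B <: A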

by mdenton8 at July 28, 2014 07:50 PM

CompsciOverflow

Is "Fill-in" word puzzle NP-complete?

So "fill-in" is a game where an $n \times n$ board is given with certain cells removed, and certain remaining cells are filled in with letters. Then a list of words is given. The problem is to fill in the board with all the given words (using each word once and only once), respecting the letters that are already filled in beforehand. Is this an NP-complete problem? It's certainly in NP because giving a valid filling in of words is a certificate when a solution exists.

by user2566092 at July 28, 2014 07:50 PM

StackOverflow

How to implement response code generically

I am making an ElasticSearch request using its Java client API, from Scala code. The user can request nested aggregations along with topHits documents, etc. The number of aggregations (including how deep they are in the aggregation hierarchy) is totally dynamic and up to the user creating the aggregation request. Given the dynamic nature of the request, the response structure will also change. What's the best way to retrieve/parse the JSON response?

Following is a sample aggregations section of a response. Again, the request is dynamic, and next time the user may decide to run aggregations on a totally different set of fields.

"aggregations": {
    "top_makes": {
        "buckets": [
            {
                "key": "toyota",
                "doc_count": 129,
                "avg_length": {
                    "value": 57.002
                },
                "avg_year": {
                    "value" : 2008
                },
                "top_models": {
                    "buckets": [
                        {
                            "key": "corolla",
                            "doc_count": 30,
                            "top_res": {
                                "hits": {
                                    "total": 30,
                                    "max_score": 1,
                                    "hits": [
                                        {
                                            "_index": "cars",
                                            "_type": "car",
                                            "_id": "85",
                                            "_score": 1,
                                            "_source": {

                                                "make": "Toyota",
                                                "color": "Yellow",
                                                "year": 2010
                                            }
                                        }, 

Sample client code for the above structure may look like the snippet below. Again, this code will not work for a different aggregation structure, so the response-handling code should be dynamic as well. The println calls below are for illustration purposes; eventually those values should be used to build some kind of case-class response structure.

val topMakes: Terms = response.getAggregations().get("top_makes")
val buckets = topMakes.getBuckets().asScala
 buckets.foreach { b =>
  println("key -> " + b.getKey())
  println("docCount -> " + b.getDocCount())
  val topModels = b.getAggregations().getAsMap().asScala
  topModels.foreach {
    case (m, agg: Terms) =>
      val modelB = agg.getBuckets().asScala
      modelB.foreach { m =>
        println("..key -> " + m.getKey())
        println("..docCount -> " + m.getDocCount())
        val topRes = m.getAggregations().getAsMap().asScala
        topRes.foreach {
          case (h, a: TopHits) =>
            val hits = a.getHits().getHits()
            hits.foreach { r =>
              println(r.getSource())

            }
          case (h,a:InternalAvg) => println(" Avg length "+a.getValue() )

        }
      }

  }

}
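
A rough sketch of one way to cope with the dynamic structure (my own idea, reusing the same client API calls and imports as above; the helper name walk is hypothetical): recurse over the aggregations map instead of hard-coding each level.

import org.elasticsearch.search.aggregations.Aggregation

def walk(aggs: scala.collection.Map[String, Aggregation], depth: Int = 0): Unit =
  aggs.foreach {
    case (name, terms: Terms) =>
      terms.getBuckets().asScala.foreach { b =>
        println(("  " * depth) + name + " key=" + b.getKey() + " docCount=" + b.getDocCount())
        walk(b.getAggregations().getAsMap().asScala, depth + 1)   // recurse into sub-aggregations
      }
    case (name, hits: TopHits) =>
      hits.getHits().getHits().foreach(h => println(("  " * depth) + name + " " + h.getSource()))
    case (name, avg: InternalAvg) =>
      println(("  " * depth) + name + " avg=" + avg.getValue())
  }

// walk(response.getAggregations().getAsMap().asScala)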

by user2066049 at July 28, 2014 07:23 PM

Lobsters

StackOverflow

JVM Gotchas, especially for Clojure

I remember I used to work at a company that couldn't run their JVM software on the OpenJDK JVM. They had to use the Oracle JVM. (Full disclosure: they were writing in groovy/grails.)

But I look at a lot of other JVM applications, and they seem to work fine on both JVMs. The OpenJDK JVM seems to be a solid implementation.

Being a Clojure enthusiast, I want to be able to code for both JVMs.

So, specifically:

  1. What are some common "gotchas" which, if you were targeting one JVM, you would have to be careful about when writing for a different JVM?
  2. Are there any language specific pitfalls, especially when it comes to clojure?
  3. When writing a clojure application, is there any common pitfalls in targeting both JVMs?

by djhaskin987 at July 28, 2014 07:06 PM

Fefe

Lobsters

StackOverflow

REPL returns RDD values but SBT won't compile

When running the below method from a fresh spark shell REPL session everything works fine. However when I try to compile the class containing this method I get the following errors

Error:(21, 50) value values is not a member of org.apache.spark.rdd.RDD[(Long, org.apache.spark.mllib.recommendation.Rating)]
val training = ratings.filter(x => x._1 < 6).values.repartition(numPartitions).persist
                                             ^
Error:(22, 65) value values is not a member of org.apache.spark.rdd.RDD[(Long, org.apache.spark.mllib.recommendation.Rating)]
val validation = ratings.filter(x => x._1 >= 6 && x._1 < 8).values.repartition(numPartitions).persist
                                                            ^
Error:(23, 47) value values is not a member of org.apache.spark.rdd.RDD[(Long, org.apache.spark.mllib.recommendation.Rating)]
val test = ratings.filter(x => x._1 >= 8).values.persist
                                          ^

In both cases I'm using Spark 1.0.1. The code itself is as follows.

def createDataset(ratings: RDD[Tuple2[Long,Rating]]): List[RDD[Rating]] = {

    val training = ratings.filter(x => x._1 < 6).values.repartition(numPartitions).persist
    val validation = ratings.filter(x => x._1 >= 6 && x._1 < 8).values.repartition(numPartitions).persist
    val test = ratings.filter(x => x._1 >= 8).values.persist
    val numTraining = training.count
    val numValidation = validation.count
    val numTest = test.count

    println(" Number Of Training ::: " + numTraining + " numValidation ::: " + numValidation + " ::: " + numTest)
    List(training,validation,test)
  }

It is adapted slightly from the MLlib tutorial; no idea what's going wrong.
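
A likely cause (an assumption, but consistent with the REPL-vs-compile difference): in Spark 1.0.x the pair-RDD operations such as values come from an implicit conversion that the spark shell imports automatically, while compiled code has to import it explicitly:

import org.apache.spark.SparkContext._   // brings rddToPairRDDFunctions into scope (pre-1.3 Spark)
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.recommendation.Rating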

by steve at July 28, 2014 06:57 PM

Lobsters

"First-class 'Statements'": Looking at IO as data, through a Haskell case study

A look at Haskell’s approach of handling the execution and sequencing of IO as regular data structures, and not as special syntactical constructs.

Comments

by jle at July 28, 2014 06:51 PM

QuantOverflow

Usage of Bollinger bands

I looked through several sources on Bollinger bands and I do not see clear recipes for their use. Wikipedia says "The use of Bollinger Bands varies widely among traders." A QSE discussion also seems to say "Just like everyone else that's been down this path, you'll have to prove this stuff to yourself."

Question: Would you be so kind as to share your experience with me - how (and when) do you use them? What are the outcomes? And what idea stands behind it?


Let me say what I understand: if the market is in a horizontal trend, i.e. we can think of it as price = Const + noise, then it is reasonable to expect that if at some moment the price is quite high, it will return to the mean - so we go short at the high price and close the position later, when the price returns to the trend (= Const).

However, if we have a positive trend, price(t) = A*t + B + noise, it very much depends on how fast the noise is changing, how big A is, and the volatility of the noise. If the trend goes up quickly, then even if the price is far above the trend and returns to it some time later, at the moment of return the price will be much higher than it was before (because of the trend), so if you go short you will lose.
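
For reference, the bands themselves are usually defined as follows (standard definition, with an $n$-period window and multiplier $k$, typically $n = 20$ and $k = 2$): the middle band is $\mathrm{SMA}_n(\text{price})$, the upper band is $\mathrm{SMA}_n + k\,\sigma_n$, and the lower band is $\mathrm{SMA}_n - k\,\sigma_n$, where $\sigma_n$ is the rolling standard deviation of the price over the same window.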

by Alexander Chervov at July 28, 2014 06:50 PM

StackOverflow

Partially specify type parameter in Scala?

def apply[T, LP <: ViewGroupLayoutParams[_, TSpinner[T]]]()(implicit context: android.content.Context, defaultLayoutParam: TSpinner[T] => LP): TSpinner[T] = {
  val v = new TSpinner[T]
  v.<<.parent.+=(v)
  v
}

Is it possible to only give one parameter?

val v = new TSpinner[T]()

Because normally, apart from the type parameter T, the other parameters are all implicit or inferred.
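
A sketch of a common workaround (my assumption about the intent, since Scala does not let you supply only some of a method's type arguments): split the call into two steps via a small helper class, so that T is given explicitly and LP is inferred from the implicit argument.

class MakeSpinner[T] {
  def apply[LP <: ViewGroupLayoutParams[_, TSpinner[T]]]()(
      implicit context: android.content.Context,
      defaultLayoutParam: TSpinner[T] => LP): TSpinner[T] = {
    val v = new TSpinner[T]
    v.<<.parent.+=(v)
    v
  }
}

def spinner[T] = new MakeSpinner[T]   // usage: spinner[String]()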

by molikto at July 28, 2014 06:44 PM

QuantOverflow

Chaikin Money Flow Persistence Formula

I am trying to create an approximation of the Accumulation/Distribution Rating using the Chaikin Money Flow Persistence indicator. I have the Chaikin Money Flow formula below; could anyone assist me in extending it to the Chaikin Money Flow Persistence formula?

Step 1: ((Close – Low) – (High – Close)) / (High – Low) * Volume
Step 2: 21-Day Average of Step 1 (Daily MF) / 21-Day Average of Volume

Any ideas are welcome, any better approaches are welcome too.

by Avagut at July 28, 2014 06:41 PM

CompsciOverflow

Categorizing scanned documents by searching for a special text [on hold]

I have thousands of scanned images and a limited number of categories. Each image sits in a category. Images are text documents and their category is detected by document title or a text found in all images in that category. Which libraries and algorithms should I use to search for a text in an image?

Indeed, I need to give the application an array of pattern images mapped to their categories, and the application should search for these patterns in the images and select the proper category.

by user16948 at July 28, 2014 06:40 PM

StackOverflow

Scala - mkString and getOrElse("")

Is there any difference between

abc.mkString

and

abc.getOrElse("")

when abc is an Option[String]?
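
For what it's worth, a quick REPL check (mkString works on an Option through the implicit option-to-Iterable conversion; both calls give the same result here):

scala> Some("abc").mkString
res0: String = abc

scala> (None: Option[String]).mkString
res1: String = ""

scala> (None: Option[String]).getOrElse("")
res2: String = ""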

by Matwell at July 28, 2014 06:35 PM

UnixOverflow

ZFS: Trying to remove top-level drive from Zpool

I have a zpool called storage that contains a five device raidz1 array.

Today I went and bought another 3TB device and put it in my enclosure. However, instead of creating a new pool and adding that device to it, I made a mistake and added it to my existing storage pool.

Now I have a top-level device that I want to remove called sdg (that's the new drive). Every time I try to remove it I get:

cannot remove sdg: only inactive hot spares, cache, top-level, or log devices can be removed.

So how do I remove this device now? If this device fails, my entire pool will be unavailable. I'm thinking I should go buy another drive so at least it will be mirrored, but I just can't believe there isn't a proper way to do this.

This is my status dump:

pool: storage
state: ONLINE
scan: scrub canceled on Wed Jul 23 17:26:08 2014

config:

 NAME                                 STATE     READ WRITE CKSUM
 storage                              ONLINE       0     0     0
   raidz1-0                           ONLINE       0     0     0
     ata-ST3000DM001-1CH166_Z1F1PYM6  ONLINE       0     0     0
     ata-ST3000DM001-1CH166_W1F24CSC  ONLINE       0     0     0
     ata-ST3000DM001-1CH166_W1F2372R  ONLINE       0     0     0
     ata-ST3000DM001-1CH166_W1F24BTK  ONLINE       0     0     0
     ata-ST3000DM001-1CH166_Z1F2KKLW  ONLINE       0     0     0
   sdg                                ONLINE       0     0     0

errors: No known data errors

by chronic at July 28, 2014 06:27 PM

/r/netsec

StackOverflow

How to pass values to an anonymous function, that is referenced with a map literal?

I was wondering if anybody knew any concise ways (if possible) to pass values to this anonymous function's 'x' parameter?

(def Holder { :add-values (fn [x] (* x x)) }) 

Also, how could I use the same method to apply values to this anonymous function's 'y' parameter?

{:another-function (fn [y] (* y y))} 

Thanks.

by geem7n at July 28, 2014 06:24 PM

How to create different types of number sequences using Scala's Fibonacci?

How to create various types of number sequences based on previous values, like in Scala's Fibonacci Stream example?

x4, /2, e.g. => 10 40 20 80 40 160

Test

test("numbersequence") {
  assert(Calculation.numbersequence(10, 40, 20, 80) === 160)
}

Main

def numbersequence(a: Int, b: Int, c: Int, d: Int) : Int = {
  lazy val s: Stream[Int] = a #:: s.scanLeft(b)(_+_)
  s(5)
}

Could the same approach be used to create such number sequences or should another approach be used?
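
A small sketch of one way to do it (assuming the intended rule is: alternately multiply by 4 and divide by 2), written in the same lazy-stream style as the Fibonacci example:

def alternate(v: Int, timesFour: Boolean): Stream[Int] =
  v #:: alternate(if (timesFour) v * 4 else v / 2, !timesFour)

// alternate(10, timesFour = true).take(6).toList == List(10, 40, 20, 80, 40, 160)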

by utrecht at July 28, 2014 06:23 PM

Lobsters

/r/scala

/r/emacs

package wanted: emacs template for package.json

I guess I could do this with any templating system (yasnippet?) but has anyone already done it?

What do I want? I just want to open a new package.json file and have a template inserted and to be able to complete the various fields, add optional fields and then save it.

submitted by nicferrier
[link] [9 comments]

July 28, 2014 06:07 PM

DragonFly BSD Digest

BSDNow 047: DES Challenge IV

I missed this last week because I was on the road: BSDNow 047 is up, titled DES Challenge IV; it has some followup on recent topics like pf in FreeBSD and the recent OpenBSD hackathon, plus an interview with Dag-Erling Smørgrav.

by Justin Sherrill at July 28, 2014 06:04 PM

BSDTalk 243: Ingo Schwarze

It’s all multimedia day here, as BSDTalk 243 is also out with 16 minutes of conversation with Ingo Schwarze about mandoc.  Mandoc is the man replacement in OpenBSD and built-but-not-yet-used in DragonFly.  ‘man replacement’ is probably an oversimplification.

by Justin Sherrill at July 28, 2014 06:04 PM

Lobsters

What are you working on this week?

It’s Monday, so it is time for our weekly “What are you working on?” thread. Please share links and tell us about your current project. Do you need feedback, proofreading, collaborators?

by zhemao at July 28, 2014 05:59 PM

/r/emacs

Help a newbie with emacs code folding, please.

Hello, I am a recent Vim convert to Emacs, got the evil mode and learning quickly, but one thing just doesn't work: code folding through the hideshow minor mode.

I want to make "{{{" and "}}}" act as markers for folds (the "marker" foldmethod of Vim), and the wiki says that one just has to add an element to the corresponding var. I do that in my .emacs, here's the result:

hs-special-modes-alist is a variable defined in `hideshow.el'. Its value is:

((haskell-mode "{{{" "}}}" nil nil nil)
 (c-mode "{" "}" "/[*/]" nil nil)
 (c++-mode "{" "}" "/[*/]" nil nil)
 (bibtex-mode ("@\\S(*\\(\\s(\\)" 1))
 (java-mode "{" "}" "/[*/]" nil nil)
 (js-mode "{" "}" "/[*/]" nil))

You can see that the option for haskell-mode is there with "{{{" and "}}}", but it doesn't work - the folds are just not found.

What am I doing wrong? Are there any better options for code folding in emacs? Thanks in advance.

submitted by aicubierre
[link] [4 comments]

July 28, 2014 05:53 PM

StackOverflow

how do I get a ZMQ Router to raise an error if it is busy?

I've got a REQ -> ROUTER -> [DEALER,DEALER... DEALER] setup going where REQ is a client, ROUTER is a queue and the DEALER sockets are workers that process data and send it back to ROUTER which sends it back to REQ. Working fine when there are enough DEALERs to handle the work. But if I slow down the DEALERs the ROUTER will never tell me that it's getting more work than it can handle.

The docs say:

ROUTER sockets do have a somewhat brutal way of dealing with messages they can't send anywhere: they drop them silently. It's an attitude that makes sense in working code, but it makes debugging hard. The "send identity as first frame" approach is tricky enough that we often get this wrong when we're learning, and the ROUTER's stony silence when we mess up isn't very constructive.

Since ØMQ v3.2 there's a socket option you can set to catch this error: ZMQ_ROUTER_MANDATORY. Set that on the ROUTER socket and then when you provide an unroutable identity on a send call, the socket will signal an EHOSTUNREACH error.

I'm honestly not sure if that's the same problem that I'm seeing. Stony silence sure matches what I'm seeing.

Here's the code for the setup:

var argsToString, buildSocket, client, q;

buildSocket = function(desc, socketType, port) {
  var socket;
  log("creating socket: " + (argsToString(Array.apply(null, arguments))));
  socket = zmq.socket(socketType);
  socket.identity = "" + desc + "-" + socketType + "-" + process.pid + "-" + port;
  return socket;
};

argsToString = function(a) {
  return a.join(', ');
};

client = buildSocket("client", 'req', clientPort);

q = buildSocket("q", "router", qPort);

q.setsockopt(zmq.ZMQ_ROUTER_MANDATORY, 1);

q.on('error', function() {
  return log('router error ' + argsToString(Array.apply(null, arguments)));
});

I can post more code if needed. The issue is that when the REQ socket sends 10 messages in a second but the DEALERs take 2 seconds to do their work the ROUTER just ignores incoming messages, regardless of ZMQ_ROUTER_MANDATORY. I've sent 1000s of messages and never seen an error (.on 'error') thrown from any of the sockets.

There's talk of ZMQ_HWM out there, but the node driver doesn't seem to support it for DEALERs or ROUTERs.

How can I manage a ROUTER that runs out of places to send messages to?

by jcollum at July 28, 2014 05:34 PM

/r/netsec

CompsciOverflow

Adding a node between two others, minimizing its maximum distance to any other node

We are given an undirected graph weighted with positive arc lengths and a distinguished edge $(a,b)$ in the graph. The problem is to replace this edge by two edges $(a,c)$ and $(c,b)$ where $c$ is a new node, such that the length of the path $(a,c,b)$ is equal to the inital length of $(a,b)$, and such that the choice of the length of $(a,c)$ minimizes the maximum distance of node $c$ to any other node of the graph. Is there any graph-theory based algorithm that can solve such problem rather than brute force?

Actually, the existence of the original edge $(a,b)$ is not essential. Only the end nodes $a$ and $b$, and the length matter.

For example we have a graph:

1 2 10
2 3 10
3 4 1
4 1 5

In every line the first two values indicate the node number, and the 3rd one is the corresponding edge value. Now I want to find a new node $c$ between nodes 1 and 2. The answer gives a node $c$ at distance 2 to node 1 (and distance 8 to node 2). The distance of node $c$ is 2 to node 1, 8 to node 2, 1+5+2=8 to node 3, and 5+2=7 to node 4. The maximum distance is 8, which is minimal for all possible choices of the length of $(a,c)$.
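
One observation that may help (a sketch, not a full algorithm): for a chosen length $x \in [0, L]$ of $(a,c)$, where $L$ is the prescribed total length, the distance from $c$ to any node $v$ is $d_c(v) = \min(x + d(a,v),\ (L-x) + d(b,v))$, with $d(\cdot,\cdot)$ the shortest-path distances in the graph (keeping an edge of length $L$ between $a$ and $b$). After one shortest-path computation from $a$ and one from $b$, the objective $\max_v d_c(v)$ is therefore a piecewise-linear function of the single variable $x$ and can be minimized over its breakpoints.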

by user3153970 at July 28, 2014 05:27 PM

StackOverflow

Exclude test suite in ScalaTest (from maven)

One of the test suites in a given module takes potentially hours to run. What is the way to configure scalatest/maven plugin to exclude one or more suites?

As a little background: the following command includes a suite

mvn -DwildcardSuites=<Comma separated list of wildcard suite names to execute> test
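
One approach I am aware of (an assumption; the exact parameter names are worth verifying against the scalatest-maven-plugin docs): tag the slow suite's tests and exclude that tag, either via the plugin's tagsToExclude configuration in the pom or, if the plugin maps it to a system property like it does for wildcardSuites, with something like -DtagsToExclude=com.example.tags.Slow on the command line. A minimal sketch of the tagging side (the tag name is hypothetical):

import org.scalatest.{FunSuite, Tag}

object Slow extends Tag("com.example.tags.Slow")

class LongRunningSuite extends FunSuite {
  test("expensive end-to-end run", Slow) {
    // ... hours of work ...
  }
}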

by javadba at July 28, 2014 05:25 PM

TheoryOverflow

Is generalized pigeonhole search known to be no harder than PPP?

Consider the TFNP search problem

Given a positive integer $t$ in unary, positive integers $M$ and $N$ (in binary), and a function from $\{0,1,2,\dots,M-1\}$ to $\{0,1,2,\dots,N-1\}$ (as a circuit), find $\min(t,\lceil M/N\rceil)$ distinct inputs that map to the same output.


That problem is clearly hard for the class PPP. Is that problem known to be in PPP?

by Ricky Demer at July 28, 2014 05:10 PM

Lobsters

StackOverflow

IntelliJ IDEA: Cannot import SBT project

I'm completely new to development using Play or IntelliJ for that matter. I've created a simple HelloWorld application using Activator, and this is an sbt project.

I've been trying to import this to IntelliJ and this is the screen I'm stuck at: https://www.dropbox.com/s/we1a4a3184sojvb/Screenshot%202014-07-24%2016.57.11.png

In almost all tutorials I've been through online, I've seen people using an sbt option on the import screen. I've installed the SBT plugin as well, but that hasn't helped. I've restarted IntelliJ several times to no avail.

Where am I going wrong? I'm running 13.1.4 with the SBT plugin installed.

by Ashesh at July 28, 2014 04:59 PM

Planet Scala

Mobile Enterprise Integration with Scala, MongoDB and Swagger

Check out my post on Enterprise Integration and some of the tools we have built for it using MongoDB, Scala and Swagger:

Enterprise Integration for Mobile Applications

by Gregg Carrier (noreply@blogger.com) at July 28, 2014 04:52 PM

StackOverflow

Scalding: parsing comma-separated data with header

I have data in format:

"header1","header2","header3",...
"value11","value12","value13",...
"value21","value22","value23",...
....

What is the best way to parse it in Scalding? I have over 50 columns altogether, but I am only interested in some of them. I tried importing it with Csv("file"), but that doesn't work.

The only solution that comes to mind is to parse it manually with TextLine and disregard the line with offset == 0. But I'm sure there must be a better solution.
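
A sketch of what I had in mind as an alternative (an assumption; the parameter names should be checked against the Scalding version in use): the Csv source takes a skipHeader flag, so the header line can be dropped and only the interesting columns projected.

import com.twitter.scalding._
import cascading.tuple.Fields

class ParseJob(args: Args) extends Job(args) {
  Csv("input.csv",
      separator = ",",
      fields = new Fields("header1", "header2", "header3"),
      skipHeader = true)
    .read
    .project('header1, 'header3)   // keep only the columns of interest
    .write(Tsv("output.tsv"))
}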

by Savage Reader at July 28, 2014 04:47 PM

How to chain Future[\/[A,B]] in scala?

How can I do a for comprehension with data of type Future[\/[String,Int]]?

Here is a starting point, which doesn't compile.

import scala.concurrent.{ExecutionContext,future,Future}
import scalaz._
import Scalaz._
import ExecutionContext.Implicits.global

def calculateStuff(i:Int):Future[\/[String,Int]] = future{\/-(i)}

for {
   v1Either <- calculateStuff(1)
   v1Int <- v1Either
   v2Either <- calculateStuff(v1Int)
   v2Int <- v2Either
   v3Either <- calculateStuff(v2Int)
   v3Int <- v3Either
} yield {
   v1Int + v2Int + v3Int
}

Note: calculateStuff is just an example; there will actually be different functions, each depending on the result of the previous.
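
A sketch of the direction I was considering (assuming scalaz 7.1+, where scalaz.std.scalaFuture provides the Monad[Future] instance): wrap each step in the EitherT monad transformer so the for comprehension stays in a single monad.

import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global
import scalaz.{\/, \/-, EitherT}
import scalaz.std.scalaFuture._

def calculateStuff(i: Int): Future[\/[String, Int]] = Future(\/-(i))

val result: EitherT[Future, String, Int] =
  for {
    v1 <- EitherT(calculateStuff(1))
    v2 <- EitherT(calculateStuff(v1))
    v3 <- EitherT(calculateStuff(v2))
  } yield v1 + v2 + v3

// result.run: Future[\/[String, Int]]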

by Jhonny Everson at July 28, 2014 04:46 PM

Lobsters

AWS

AWS Week in Review - July 21, 2014

Let's take a quick look at what happened in AWS-land last week:

Monday, July 21
Tuesday, July 22
Wednesday, July 23
Thursday, July 24
Friday, July 25

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at July 28, 2014 04:13 PM

High Scalability

The Great Microservices vs Monolithic Apps Twitter Melee

Once upon a time a great Twitter melee was fought for the coveted title of Consensus Best Way to Structure Systems. The competition was between Microservices and Monolithic Apps. 

Flying the the logo of Microservices, from a distant cloud covered land, is the Kingdom of Netflix, whose champion was Sir Adrian Cockcroft (who has pledged fealty to another). And for the Kingdom of ThoughtWorks we have Sir Sam Newman as champion.

Flying the logo of the Monolithic App is champion Sir John Allspaw, from the fair Kingdom of Etsy.

Knights from the Kingdom of Digital Ocean and several independent realms filled out the list.

To the winner goes a great prize: developer mindshare and the favor of that most fickle of ladies, Lady Luck.

May the best paradigm win.

The opening blow was wielded by the highly ranked Sir Cockcroft, a veteran of many tournaments:

by Todd Hoff at July 28, 2014 03:56 PM

CompsciOverflow

Ant colony optimization for continuous functions

I am trying to do optimization of a voice activity detection function, which is a function with continuous parameters. This is easily accomplished with genetic algorithms, simulated annealing, and tabu search, but I'm somewhat confused on how to accomplish this with Ant Colony Optimization (ACO).

From what I've read, ACO is mostly used for solving problems that can be formulated as a graph. I've searched for resources relating to multiple parameter function optimization, but the closest thing I found was this article for a single parameter on a continuous function and this long paper with no pseudocode which is contained in this PHD thesis. Are there any resources (websites or books) for accomplishing multiple parameter continuous function optimization with ACO that involve an implementation example?

Alternatively, is the key here to discretize the continuous inputs? If so, what methods exist to do this in a way that works well with ACO?

by Seanny123 at July 28, 2014 03:53 PM

/r/netsec

/r/emacs

ELI5 Recursive editing. What is it for? When would I use it? and How?

I can go read the manual now, but I just realised this is one of the few things I know about in Emacs without really knowing what it's actually for / what editing problem it solves.

I imagine it might be interesting enough to post here on r/Emacs.

submitted by instant_sunshine
[link] [7 comments]

July 28, 2014 03:41 PM

CompsciOverflow

Complexity of the decision version of determining a min-cut

I was wondering what the complexity of the following problem is:

Given: A flow network $N$ with a source $s$, sink $t$ and a number $k$.
Question: Is there an $s$-$t$ cut of capacity at most $k$?

Obviously, the problem is in P by standard methods. When trying to logspace reduce the P-complete LP-optimization problem, I encountered the following problems:

  • If the LP problem has no feasible solution, then $t$ should not be reachable from $s$. In this case, checking whether an LP problem has a feasible solution would be quite simple (don't know whether this is true).
  • If the LP problem is unbounded, then the min-cut should be unbounded as well. This could possibly be the case if $s$ and $t$ are identical. Again, checking whether an LP has an unbounded solution would be even easier.

For the lower bound, NL-hardness is quite easy: For a given directed graph $G$ with nodes $s$ and $t$, just take $G$ as a flow network and assign capacity 1 to each edge. Then ask if there is an $s$-$t$ cut of capacity at most 0. $t$ is reachable from $s$ if and only if the answer to this question is no. Since coNL=NL, we are done.

Moreover, the problem is NL-complete for outerplanar graphs. For this, $s$ and $t$ are joined by an edge of infinite capacity. This edge creates the two new faces $f_1$ and $f_2$. Now, answer the question whether there is a path of length at most $k$ from $f_1$ to $f_2$ in the dual graph.

I do not see how to generalize this to general graphs nor how to overcome the problems described when reducing LP optimization.

by Oliver Witt at July 28, 2014 03:37 PM

Xampp installation [on hold]

Windows 7 and Windows 8 are installed on separate drives C: and E: respectively on my desktop PC. XAMPP is installed on C:, where Windows 7 is installed and running. Can I install XAMPP on E:, where Windows 8 is running? When I try to install XAMPP on E:, Apache fails to install; it reports that port 83 is already in use. What can I do?

by Bishwajit Paul at July 28, 2014 03:34 PM

/r/netsec

QuantOverflow

Option based portfolio insurance in practice

My question is about option based portfolio insurance in practice.

Some insurance companies offer products where there is a mutual fund (equity and bonds) and a guarantee attached. This guarantee is usually given by some investment bank.

The bank can either apply a CPPI model or it can apply option based insurance meaning that it acts as if it had sold a put option on the mutual fund. Talking about the option case: what do banks in practice do?

Is it usual to

  • hedge the risk by selling futures?
  • swap the risk somehow?
  • apply strict risk limits for the mutual fund?
  • demand a certain strategy for the fund?
  • anything else?

What is the best practice for options based portfolio insurance?

EDIT: To be more precise about the product: in Austria we have a pensions system financed by the state (by contributions by the labor force). As this will probably not suffice for the future the population is motivated to invest in retirement pension insurance contracts.

These insurance contracts are mainly savings contracts where monthly payments are invested in a mutual fund that holds 30% stocks and 70% bonds. In addition to this, there are premiums from the state. Furthermore, these insurance contracts need to have guarantees attached such that you get back at least all the money you invested after a minimum time of, say, 10 years.

I assume that this system is applied in other countries as well. So this question is rather general.

final edit: there are insurance companies who sell these products in retail. Then there is the guarantee that can be implemented by an investment bank as option based portfolio insurance. I hope the term "insurance" is clear wherever it appears in my question.

by Richard at July 28, 2014 03:30 PM

/r/compsci

Questions for CS graduate/PhD students

Hi, I'm a rising junior at state flagship in the Northeast studying CS and math. My school has a solid CS program (ranked top 20/25 according to graduate rankings). I started working in a CS professor's lab doing machine learning work starting fall of my sophomore year. I really enjoyed the experience, and am seriously considering graduate school at this point. However, I have some questions for you guys:

  • I have no interest in staying in academia. I would prefer not be a Professor. My goals after school would be to move to a major city (NYC, Boston, Seattle,etc.) and work in industry (research lab, startup, whatever). My main reasons for doing a PhD are that I can really dive deep into ML type stuff. I really don't have any interest in web dev or stuff like that, and it seems that being a PhD student allows for a lot of flexibility in choosing what things someone wants to work on. Asides from that, intellectual fulfillment is also motivator. Ultimately, I want to work on the cutting edge, and I think a PhD can help me build the skills to do so. Are these intentions/motivations sensible?
  1. For current/past students, how was your experience? If you don't mind me asking- what areas are you working on, what university, how do you feel about your decision so far, etc? What was your undergrad profile like? When did you seriously start considering grad school?

  2. How do grad admissions work? I'm assuming recommendations, research experience, and grades/GRE are the three main criteria. I have an alright GPA- 3.76, but I can get it upto a ~3.85 by application time (I had a bad semester frosh year- 3.0 flat). What about the statement of purpose? I might be able to coauthor a publication with my PI by the time I apply, how much does that help? I took a mock GRE and got a 800Q, 710Verbal, and 5 on the writing. How hard is it to get into CMU/Berkeley/MIT/Stanford?

  3. Joining multiple groups: Right now, I'm in one ML group, but I am also considering joining another lab that works on database stuff. Is working for multiple groups as an undergrad advisable?

  4. Summer REUs: I have an industry internship for this summer, but should I chose REUs over industry internships? I didn't really apply to any REUs, and I enjoy industry internships for the summer (9-5 can be pretty relaxing!).

  5. Any advice, things you wished you did or didn't would be greatly appreciated.

Thanks for reading!

submitted by eryf
[link] [30 comments]

July 28, 2014 03:27 PM

StackOverflow

Streaming a CSV file to the browser in Spray

In one part of my application I have to send a CSV file back to the browser. I have an actor that replies with a Stream[String]; each element of the Stream is one line of the file that should be sent to the user.

This is what I currently have. Note that I'm currently returning MediaType text/plain for debugging purposes, it will be text/csv in the final version:

trait TripService extends HttpService
                          with Matchers
                          with CSVTripDefinitions {
  implicit def actorRefFactory: ActorRefFactory

  implicit val executionContext = actorRefFactory.dispatcher

  lazy val csvActor = Ridespark.system.actorSelection(s"/user/listeners/csv").resolveOne()(1000)

  implicit val stringStreamMarshaller = Marshaller.of[Stream[String]](MediaTypes.`text/plain`) { (v, ct, ctx) =>
    ctx.marshalTo(HttpEntity(ct, v.mkString("\n")))
  }

  val route = {
    get {
      path("trips" / Segment / DateSegment) { (network, date) =>
        respondWithMediaType(MediaTypes.`text/plain`) {
          complete(askTripActor(date, network))
        }
      }
    }
  }

  def askTripActor(date: LocalDate, network: String)
                  (implicit timeout: Timeout = Timeout(1000, TimeUnit.MILLISECONDS)): Future[Stream[String]] = {
    csvActor flatMap { actor => (actor ? GetTripsForDate(date, Some(network))).mapTo[Stream[String]] }
  }
}

class TripApi(info: AuthInfo)(implicit val actorRefFactory: ActorRefFactory) extends TripService

My problem with this is that it will load the whole CSV into memory (due to the mkString in stringStreamMarshaller). Is there a way of not doing this, streaming the lines of the CSV as they are generated?
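
One direction that seems possible (a rough sketch using spray-can's chunked responses; the helper name streamCsv is hypothetical and the details need checking): drop down to the responder and emit each line as a chunk instead of marshalling one big string.

import spray.http._
import spray.routing.RequestContext

def streamCsv(ctx: RequestContext, lines: Stream[String]): Unit = {
  ctx.responder ! ChunkedResponseStart(
    HttpResponse(entity = HttpEntity(ContentType(MediaTypes.`text/csv`), "")))
  lines.foreach(line => ctx.responder ! MessageChunk(line + "\n"))   // one chunk per CSV line
  ctx.responder ! ChunkedMessageEnd
}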

by Mario Camou at July 28, 2014 03:25 PM

Mocking a Scala Trait using Scala, ScalaTest, and Mockito

For whatever reason Mockito will not mock a method I have in a trait; it calls the actual method instead. Here is my test:

"displays the index page" in {
  val mockAuth = mock[AuthMethods]
  when(mockAuth.isAllowed(-1, "", "")).thenReturn(true)
  val controller = new TestController()
  val result = controller.index().apply(FakeRequest())
  val bodyText = contentAsString(result)
  bodyText must include ("Name")
}

Here is the trait and object:

trait AuthMethods {
  def isAllowed(userID: Long, method: String, controller: String): Boolean = {
     //do stuff..
  }
}

object Authorized extends AuthMethods with ActionBuilder[Request] {
  def invokeBlock[A](request: Request[A], block: (Request[A]) => Future[Result]) = {
    if (isAllowed(userID, method, controller)) {
       //do some more stuff..
    }
  }
}

Any thoughts on why it's calling the actual method versus the mocked method? I am using Scala 2.10.4. Any help would be appreciated.

I forgot to mention, Authorized is an action composition and here is how it is being used:

  def index = Authorized {
    Ok(html.Stations.index(Stations.retrieveAllStations))
  } 

by James Little at July 28, 2014 03:17 PM

CompsciOverflow

Logic Questions about java objects and their inheritence

If I have 3 Java classes:

public class A {...}
public class B extends A {...}
public class C extends B {...}

Which of the following objects can I create, and why?

A object1 = new B();
A object1 = new C();
B object1 = new C();
C object1 = new B();

by AJHacker at July 28, 2014 03:12 PM

StackOverflow

Return plain map from a Clojure record

I have a record:

(defrecord Point [x y])
(def p (Point. 1 2))

Now I want to extract just the map from the record. These ways get the job done. Are these good ways? Are there better ways?

(into {} (concat p))
(into {} (map identity p))
(apply hash-map (apply concat p))

I was hoping there might be a cleaner way, perhaps built-in to the notion of a record.

by David James at July 28, 2014 03:12 PM

DataTau

TheoryOverflow

Which formalism is best suited for automated theorem proving in set theory?

Abbreviations - FOL is first-order logic; NBG is Von Neumann–Bernays–Gödel set theory; SEP is Stanford Encyclopedia of Philosophy; HOL is higher-order logic; ATP is automated theorem proving.

Context - An entire section of TPTP’s [1] axioms is devoted to set theories - for example NBG - for FOL theorem provers. Art Quaife wrote an entire book [2] on axiomatizing NBG in FOL.

It seems to me - These axioms all take membership as a sort of undefined concept, and then build subsets, power sets, union, difference, and so on.

In contrast, here is what SEP has to say on HOL (emphasis mine) -

Second-order logic is an extension of first-order logic where, in addition to quantifiers such as “for every object (in the universe of discourse),” one has quantifiers such as “for every property of objects (in the universe of discourse).”   This augmentation of the language increases its expressive strength, without adding new non-logical symbols, such as new predicate symbols.   For classical extensional logic (as in this entry), properties can be identified with sets, so that second-order logic provides us with the quantifier “for every set of objects.”

It seems to me - that the concept of membership can only be represented in HOL. If that is true, how can FOL axiom schemas, taking membership as an undefined concept, prove theorems in set theory?

What I am trying to do - I need to do ATP in a theory which is FOL+sets. I was unsure whether to use an FOL prover (say, Prover9) or an HOL prover (say, HOL Light). At present, I’m using SNARK (FOL) with Quaife’s axiomatization of NBG, which is unable to prove my theorems (see below for example).

Question - Is this failure to be expected, since FOL cannot ‘understand’ membership, and hence I need an HOL prover? Or am I misunderstanding / doing something wrong?

[1] The TPTP Problem Library for Automated Theorem Proving, http://www.cs.miami.edu/~tptp/
[2] Automated Development of Fundamental Mathematical Theories, by Art Quaife, Kluwer Academic Publishers (1992)

Finally, an example -

Here is SNARK code representing three axioms, and a theorem, which SNARK is unable to prove. SNARK has also been given Quaife's axioms of NBG set theory. (Note member, the set membership function.) -

(assert  '(forall (x y z)
    (implies
      (and
        (part-of x y)
        (part-of y z)
      )
      (part-of x z)   ) ) :name '1point1point1)

(assert  '(forall (x y alpha t)
    (exists (z arb-part)
      (implies
        (and
          (and
            (member t alpha)
            (part-of t x)
          )
          (implies
            (part-of y x)
            (and
              (member z alpha)
              (and
                (part-of arb-part z)
                (part-of arb-part y)
        ) ) ) )
        (sum-of x alpha)   ) ) ) :name '1point1point2)

(assert  '(forall (alpha)
    (exists (arb-member sum)
      (implies
        (member arb-member alpha)
        (sum-of sum alpha)
) ) ) :name '1point1point3)

(prove  '(forall (x)
    (part-of x x)   ) :name '1point1point4)

EDIT -

Jake said - "set theories are often defined in first order logic". In fact, the question I was asking was - "Can set theories be defined in first order logic, in an automated theorem prover?". That's what I meant, when I said "is set theory equivalent to first order logic?" (which is an incorrect way of putting it, and for which I apologize).

It turns out that ZFC cannot be represented in an FOL theorem prover, since it is not finitely axiomatizable. NBG and some other set theories can be, and several such axiomatizations exist in TPTP.

However, these axiomatizations seem to take the concept of membership as undefined, and build subsets, power sets, etc on that. The SEP paragraph I quoted seems to suggest that representing membership needs HOL. In such a case, it seems paradoxical to me that FOL provers are expected to prove theorems in set theory, knowing nothing of membership.

I'm sure this is not a real paradox - just something I don't understand. Hence the question. I hope that makes it clearer.

by Atriya at July 28, 2014 02:55 PM

Prerequisites for theoretical computer science

I am a freshman and a Computer Science major. I have a very poor understanding of electrical engineering and electronics. I want to pursue a career in theoretical computer science, especially quantum computing. Does this weakness pose a barrier and prevent me from pursuing a career in theoretical CS? Also, what qualities/prerequisites are required for pursuing a career in the above field? Anyone in the area of quantum computing/theoretical CS, please advise me on how to proceed. Thank you.

by rohit D at July 28, 2014 02:45 PM

CompsciOverflow

Extend the causal memory implementation to wide-area distributed storage systems

In the seminal paper "Causal memory: definitions, implementations, and programming", distributed causal memory is defined to ensure that all the processes in a system agree on the relative ordering of operations that are causally related.

Its implementation:

For its implementation, each process maintains a private copy of the abstract shared causal memory. All the processes are peer-equivalent: each invokes read/write operations itself (as a client), handles them (as a server), and communicates asynchronously with the others. More importantly, vector clocks are used to track causality between operations. Specifically, when a (write) operation is invoked (by some process as a client), a new vector clock is generated and assigned to it. In other words, the implementation (in section 5) keeps one entry in each vector clock per client.

Extending it to wide-area distributed data storage system:

The situation is depicted in the following figure. Now consider a wide-area distributed data storage system that aims to implement a causal memory. I have the following problems concerning the scalability of the implementation mentioned above.

My problems:

  1. The number of clients in such a system can be enormous and even unpredictable. Does this mean that it is not practical (if not impossible) to adopt the vector clock mechanism that keeps one entry per client?
  2. If the vector clock keeping one entry per client is not feasible, how to track the causality between operations? Are there any research papers or systems on this issue?
  3. The architecture depicted in the figure is definitely not applicable to the wide-area distributed data storage system because we cannot simply provide an exclusive server for each client. Then, what are the appropriate architectures in this situation?

[figure: p2p_localarea_replication]

by hengxin at July 28, 2014 02:32 PM

/r/clojure

/r/emacs

/r/clojure

Portland Pattern Repository

StackOverflow

Convert Akka Iterable to java.lang.Iterable?

I would like to iterate over the children of a given Actor in a for-each loop, like so:

    for(ActorRef child: this.getContext().children()){
      // do things
    }

This yields an error though:

    HelloWorld.java:78: error: for-each not applicable to expression type
                    for(ActorRef child: this.getContext().children())
                                                                  ^
      required: array or java.lang.Iterable
      found:    Iterable<ActorRef>
    1 error

The docs for UntypedActorContext say that the children() method should return an 'Iterable[ActorRef]', but the inline-hyperlink for the type definition for that particular 'Iterable' leads to the docs for the Scala Iterable-type rather than the Java type, which are not the same thing.

This can be confirmed in practice: the object returned from the children() call fails an "instanceOf Iterable" check, and calling "getClass()" on it returns "class akka.actor.dungeon.ChildrenContainer$ChildrenIterable".

It seems pretty clear to me that this is not a Java Iterable and that the error is appropriate. But how do I coerce or marshall it into a Java Iterable? This link in the Scala docs suggests that there are conversion functions for Scala->Java, but I cannot make heads or tails of what to import or how to call them from Java; the only examples I've seen have been for Scala.

P.S. I realize I can probably use a while-loop and the Scala-Iterator returned by children().iterator() to construct the equivalent of a for-each loop here. What I'm really after is understanding how to use the type-conversion routines that Scala provides.

by SAyotte at July 28, 2014 01:57 PM

UnixOverflow

Why does creating a ZPool result in this error?

When I try to create a ZPool, the following error occurs:

user@arch ~ % sudo zpool create -f -o ashift=12 -m /data media raidz /dev/disk/by-id/ata-ST2000DM001-1CH164_Z2F0TL8V /dev/disk/by-id/ata-ST2000DM001-1ER164_Z4Z030LK /dev/disk/by-id/ata-ST2000DM001-1ER164_Z4Z06PR
the kernel failed to rescan the partition table: 16  
cannot label 'sda': try using parted(8) and then provide a specific slice: -1

I have tried running the command multiple times back to back (running udevadm trigger in between too), clearing the drives using sgdisk -Z /dev/sdX. I tried parted /dev/sdX mklabel gpt as well as zpool labelclear /dev/sdX.

I have referred to the drives by /dev/disk/by-id as well as /dev/sdX but the same error occurs where the label changes depending on the order of the drives.

by Ruben at July 28, 2014 01:53 PM

Overcoming Bias

Lost For Words, On Purpose

When we use words to say how we feel, the more relevant concepts and distinctions that we know, the more precisely we can express our feelings. So you might think that the number of relevant distinctions we can express on a topic rises with a topic’s importance. That is, the more we care about something, the more distinctions we can make about it.

But consider the two cases of food and love/sex (which I’m lumping together here). It seems to me that while these topics are of comparable importance, we have a lot more ways to clearly express distinctions on foods than on love/sex. So when people want to express feelings on love/sex, they often retreat to awkward analogies and suggestive poetry. Two different categories of explanations stand out here:

1) Love/sex is low dimensional. While we care a lot about love/sex, there are only a few things we care about. Consider money as an analogy. While money is important, and finance experts know a great many distinctions, for most people the key relevant distinction is usually more vs. less money; the rest is detail. Similarly, evolution theory suggests that only a small number of dimensions about love/sex matter much to us.

2) Clear love/sex talk looks bad. Love/sex are supposed to have lots of non-verbal talk, so a verbal focus can detract from that. We have a norm that love/sex is to be personal and private, a norm you might seem to violate via comfortable impersonal talk that could easily be understood if quoted. And if you only talk in private, you learn fewer words, and need them less. Also, a precise vocabulary used clearly could make it seem like what you wanted from love/sex was fungible – you aren’t so much attached to particular people as to the bundle of features they provide. Precise talk could make it easier for us to consciously know what we want when, which makes it harder to self-deceive about what we want. And having available more precise words about our love/sex relations could force us to acknowledge smaller changes in relation status — if “love” is all there is, you can keep “loving” someone even as many things change.

It seems to me that both kinds of things must be going on. Even when we care greatly about a topic, we may not care about many dimensions, and we may be better off not being able to express ourselves clearly.

by Robin Hanson at July 28, 2014 01:45 PM

Lobsters

/r/compsci

I can have 9 hours of training, what should I choose?

Greetings,

Thanks to my job and the fact that I've been working there for two years, I am allowed to take 9 hours of training in basically any field/matter that I desire. I work in web development, but that's clearly not my aimed job for when I have finished my studies, and I don't intend to choose a training that corresponds to the work I'm doing as my job, since I'm already knowledged enough in it.

What interests me in the long term is software/game development. It is a really vast field, and that's why I am seeking help choosing a training course. Ideally I would like training within this general field (it can range from pure development to 3D modeling or anything else), and I'm wondering what training would be the most beneficial in 9 hours, i.e. bring me the most useful knowledge and maybe touch a subject that may be hard to self-teach.

Any ideas or suggestions ? Thanks a bunch !

submitted by HerrDrFaust
[link] [4 comments]

July 28, 2014 01:12 PM

/r/clojure

StackOverflow

"built in dependency injection" in scala

Hi the following post says there is "built in dependency injection" in scala

"As a Scala and Java developer, I am not even slightly tempted to replace Scala as my main language for my next project with Java 8. If I'm forced to write Java, it might better be Java 8, but if I have a choice, there are so many things (as the OP correctly states) that make Scala compelling for me beyond Lambdas that just adding that feature to Java doesn't really mean anything to me. Ruby has Lambdas, so does Python and JavaScript, Dart and I'm sure any other modern language. I like Scala because of so many other things other than lambdas that a single comment is not enough.

But to name a few (some were referenced by the OP)

Everything is an expression, For comprehensions (especially with multiple futures, resolving the callback triangle of death in a beautiful syntax IMHO), Implicit conversions, Case classes, Pattern Matching, Tuples, The fact that everything has equals and hashcode already correctly implemented (so I can put a tuple, or even an Array as a key in a map), string interpolation, multiline string, default parameters, named parameters, built in dependency injection, most complex yet most powerful type system in any language I know of, type inference (not as good as Haskell, but better than the non existent in Java). The fact I always get the right type returned from a set of "monadic" actions thanks to infamous things like CanBuildFrom (which are pure genius). Let's not forget pass by name arguments and the ability to construct a DSL. Extractors (via pattern matching). And many more.

I think Scala is here to stay, at least for Scala developers, I am 100% sure you will not find a single Scala developer that will say: "Java 8 got lambdas? great, goodbye scala forever!". Only reason I can think of is compile time and binary compatibility. If we ignore those two, all I can say is that this just proves how Scala is in the right direction (since Java 8 lambdas and default interface methods and steams are so clearly influenced)

I do wish however that Scala will improve Java 8 interoperability, e.g. support functional interfaces the same way. and add new implicit conversions to Java 8 collections as well as take advantage to improvements in the JVM.

I will replace Scala as soon as I find a language that gives me what Scala does and does it better. So far I didn't find such a language (examined Haskell, Clojure, Go, Kotlin, Ceylon, Dart, TypeScript, Rust, Julia, D and Nimrod, Ruby Python, JavaScript and C#, some of them were very promising but since I need a JVM language, and preferably a statically typed one, it narrowed down the choices pretty quickly)

Java 8 is by far not even close, sorry. Great improvement, I'm very happy for Java developers that will get "permission" to use it (might be easier to adopt than Scala in an enterprise) but this is not a reason for a Scala shop to consider moving back to Java." [1]

What exactly is the built-in dependency injection in Scala?
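
My best guess at what the quoted post means (an interpretation, not the author's own definition) is the "cake pattern": dependency injection expressed with traits and self-types, with no framework involved. A minimal sketch:

trait UserRepository { def find(id: Long): Option[String] }

trait UserRepositoryComponent {
  def userRepository: UserRepository
}

trait UserServiceComponent { this: UserRepositoryComponent =>   // declares the dependency
  class UserService {
    def userName(id: Long): String = userRepository.find(id).getOrElse("unknown")
  }
}

// Wiring happens where the components are mixed together:
object Production extends UserServiceComponent with UserRepositoryComponent {
  val userRepository = new UserRepository { def find(id: Long) = Some("alice") }
}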

by Jas at July 28, 2014 01:08 PM

CompsciOverflow

Conceptual question about entropy and information

Shannon's entropy measures the information content by means of probability. Is it the information content or the information that increases or decreases with entropy? Increase in entropy means that we are more uncertain about what will happen next.
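
(For reference, the quantity in question for a discrete random variable is $H(X) = -\sum_x p(x)\log_2 p(x)$, i.e. the expected surprisal measured in bits.)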

  1. What I would like to know is if entropy increases, does this mean that information increases?

  2. If there are two signals, one being the desired signal and the other the measured signal, let the error be the difference between the two. Or the error can be the estimation error in the context of weight learning.

What can we infer if the entropy of this error term decreases? Can we conclude that the error is reducing and the system is behaving close to the desired signal's behavior?

I shall be grateful for these clarifications.

by Ria George at July 28, 2014 01:04 PM

StackOverflow

Scala 2.11 complains with: multiple overloaded alternatives of method

I have this class:

case class Columna[T](nombre: String)

class Tabla {
    def leeTodo[T](col: Columna[T], filtro: String, orderBy: String = "") = ...

    def leeTodo[T0, T1](col0: Columna[T0], col1: Columna[T1], filtro: String, orderBy: String = "") = ...

    def leeTodo[T0, T1, T2](col0: Columna[T0], col1: Columna[T1], col2: Columna[T2], filtro: String, orderBy: String = "") = ...

}

object admver extends Tabla {
}

This code used to compile with Scala 2.10.4. Now, I'm trying Scala 2.11.2, and I get this error message:

Error:(3, 8) in object admver, multiple overloaded alternatives of method leeTodo define default arguments.
The members with defaults are defined in class Tabla in package bd and class Tabla in package bd and class Tabla in package bd and class Tabla in package bd and class Tabla in package bd and class Tabla in package bd.

I think that there is no ambiguity between the different overloads.

I wonder if this is a bug in Scala 2.11 or a new "feature".

I've seen some old questions about the subject:

but don't know if they still apply.

by david.perez at July 28, 2014 12:45 PM

/r/emacs

Emacs auto-decrypts .gpg files since my last update, it feels wrong to me

Hi there,

I have an encrypted foobar.org.gpg file on disk where I jot down things that I don't like others to see. Until yesterday, every time I closed and re-opened this file, I had to re-enter the password (which I like).

I did a system upgrade today (pacman -Syyu on Arch Linux) and have Emacs 24.3.1 now. With that, the file gets auto decrypted when I reopen it. It seems to cease after a reboot. I don't like that behavior, I want to re-enter the password every time I open it.

Does anyone know

  • what happened there, and
  • how to get the old behavior back?
submitted by goddammitbutters
[link] [3 comments]

July 28, 2014 12:43 PM

/r/compsci

Prospective CS student, can you help me out with some questions?

Hello.

I am a senior in high school and we are doing this project where we look into our academic future; this includes researching careers and universities, asking questions, as well as getting some hands-on experience in our preferred field.

I have seriously thought about majoring in Computer Science for a while now; it seems like it would fit my abilities and what I like.

However, I have no experience in this field, which means I don't know what it is actually like to study it in university or working on it at an actual job.

So, if you don't mind I'd be very grateful if you could answer a few questions for me.

  1. Why did you choose Computer Science?

  2. What would you recommend to someone who is interested in studying CompSci?

  3. Which are some jobs a CompSci major will probably be doing?

  4. What it is the job market like, currently and in the next 5 to 10 years, for a CompSci graduate?

Thank you for taking the time to read my post.

P.S. It would be very helpful if you could tell me how you are related to Computer Science, e.g. student, professor, professional, etc.

submitted by MIREVI
[link] [7 comments]

July 28, 2014 12:43 PM

CompsciOverflow

Turing degree of incomputable definable reals [on hold]

What would be the Turing degree of incomputable definable real numbers, and would every incomputable definable real number share the same Turing degree?

by Communi at July 28, 2014 12:41 PM

QuantOverflow

Non-Negativity of up-factor and down-factor in Binomial No-Arbitrage Pricing Model

Consider a stock which is trading at $S_0$ at time $t=0$ and is expected to be trading at price $uS_0$ or $dS_0$ at time $t=1$, where $u$ and $d$ are the up-factor and down-factor. The theory says that to rule out arbitrage, we must assume that $0<d<1+r<u$. Can someone explain how this assumption takes care of no-arbitrage?
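
For intuition, a standard sketch argument using only the quantities defined above: if $u \le 1+r$, short the stock at $S_0$ and invest the proceeds at rate $r$; at $t=1$ you hold $(1+r)S_0$, which is at least the stock price ($uS_0$ or $dS_0$) in either state, so you can close the short with a riskless profit. Symmetrically, if $d \ge 1+r$, borrow $S_0$ at rate $r$ and buy the stock; at $t=1$ the stock is worth at least $dS_0 \ge (1+r)S_0$, which covers the debt, again a riskless profit. Ruling out both cases forces $0 < d < 1+r < u$.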

by QuantNut at July 28, 2014 12:41 PM

StackOverflow

Does Ruby perform Tail Call Optimization?

Functional languages lead to use of recursion to solve a lot of problems, and therefore many of them perform Tail Call Optimization (TCO). TCO causes calls to a function from another function (or itself, in which case this feature is also known as Tail Recursion Elimination, which is a subset of TCO), as the last step of that function, to not need a new stack frame, which decreases overhead and memory usage.

Ruby obviously has "borrowed" a number of concepts from functional languages (lambdas, functions like map and so forth, etc.), which makes me curious: Does Ruby perform tail call optimization?

by Charlie Flowers at July 28, 2014 12:30 PM

Javac not installed with openjdk-6-jdk

I have been trying some different java compilers over the weekend and decided to stick with javac this morning. I then proceeded to clean up the mess that was caused by my testing and removed every last trace of java and did a fresh 'apt-get install openjdk-6-jdk' after autoremove and autoclean.

The following weirdness was then encountered:

tarskin@5-PARA-11-0120:~$ javac
The program 'javac' can be found in the following packages:
 * openjdk-6-jdk
 * ecj
 * gcj-4.4-jdk
 * gcj-4.6-jdk
 * gcj-4.5-jdk
 * openjdk-7-jdk
Try: sudo apt-get install <selected package>

I had already installed openjdk but I tried it anyhow, yielding:

tarskin@5-PARA-11-0120:~$ sudo apt-get install openjdk-6-jdk
[sudo] password for tarskin: 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
openjdk-6-jdk is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
tarskin@5-PARA-11-0120:~$ 

I know i must be doing something stupid but I have no idea what, if anyone else could give a pointer in the right direction that would be very much appreciated...

Cheers

EDIT: I found some other weird aspects of the 'new' instance of my Java distro: it doesn't seem to recognise, for example, 'Pattern' or 'Matcher', which should be coming from the regex import. *shrugs*

by Bas Jansen at July 28, 2014 12:29 PM

Undeadly

g2k14: Andrew Fresh on Programming Perl

This was my first hackathon so I wasn't really sure what to do. I had some plans to possibly import perl 5.20, but I was hoping to include 5.20.1 which isn't out for at least a month and espie@ says that it is too close to lock. I will continue to push local patches upstream to get everyone using perl on OpenBSD to be using the same improvements that are in our base perl.

Read more...

July 28, 2014 12:28 PM

QuantOverflow

Why is the equity premium not arbitraged away?

The Equity Risk Premium Puzzle concerns the observation that equity returns are generally greater than bond returns.

The puzzle is well known and widely studied. What is keeping investors from shorting bonds and buying equity? Why hold bonds at all when their expected rate of return is clearly lower? Are there other advantages to holding bonds?

by A.L. Verminburger at July 28, 2014 12:28 PM

Undeadly

Ingo Schwarze Interviewed on BSDTalk

The latest episode of BSDTalk involves our very own Ingo Schwarze (schwarze@):

bsdtalk243 - mandoc with Ingo Schwarze

Interview about mandoc with Ingo Schwarze. The project webpage describes mandoc as "a suite of tools compiling mdoc, the roff macro language of choice for BSD manual pages, and man, the predominant historical language for UNIX manuals."

Recorded at BSDCan 2014.

July 28, 2014 12:26 PM

TheoryOverflow

How to translate the axiom schema of induction by Curry-Howard?

I'm trying to understand the Curry-Howard correspondence. I am comfortable with it for propositional logic, but get confused when $\forall, \exists$ quantifiers come in the picture.

The axiom schema of induction (in second-order logic) is $\forall P. \left[(P(0) \implies \forall k\in \mathbb N. (P(k)\implies P(k+1)))\implies \forall n\in \mathbb N. P(n)\right]$.

To my understanding, via Curry-Howard every statement gets translated to a type. Proving a statement means showing that the type is inhabited. For example, +, = are dependent types taking two parameters of type nat; every $\forall k\in A$ gets translated into a dependent product $\prod_{k::A}$. So the axiom schema should become something like $ \left[\prod_{P::?} \left((P\, 0) \rightarrow \left(\prod_{k::\mathbb N} (P \, k)\rightarrow P \, (+\, k\, 1) \right)\right)\right]\rightarrow \prod_{n::\mathbb N}(P \,n) $

A difficulty arises when translating $\forall P$: what is the type of $P$? The problem is that, in order for the above to make sense, $(P\, k)$ is supposed to be a type for all $k::\mathbb N$ (so we are allowed to say $(P\, 0) \rightarrow \cdots$, for instance). So $P$ seems to me like a (dependent) type constructor taking a nat as parameter. But then we are taking a product over $P$, so we must be able to treat $P$ as an element of a type! The problem seems to boil down to, are type constructors themselves of some type? Can we write this statement without having to refer to such a type? Perhaps higher-order logic translates to higher-order type theory, but I do not know the latter. References would also be appreciated.

by Holden Lee at July 28, 2014 12:22 PM

StackOverflow

Check if a key exists in play.api.libs.json.Json

contains-like functionality for play.api.libs.json.Json

val data=Map("id" -> "240190", "password" -> "password","email" -> "email")

data.contains("email")//true


val info=Json.obj("id" -> "240190", "password" -> "password","email" -> "email")

Now, how do I check whether info contains "email" or not?
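
A minimal sketch of one way to do it (assuming Play's JsObject API, with keys and asOpt; not from the original post):

// true if the object has a top-level "email" field
val hasEmail: Boolean = info.keys.contains("email")

// alternative: probe the path and see whether a value comes back
val hasEmailToo: Boolean = (info \ "email").asOpt[String].isDefined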

by Govind Singh Nagarkoti at July 28, 2014 12:11 PM

QuantOverflow

Applications of Fourier theory in trading

What are fashionable applications of Fourier analysis in trading? I have heard vague ideas of applications in High Frequency Trading but can somebody provide an example, maybe a reference?

Just for clarification: The approach to split up a stock price in its cosines and to apply this for forecasts or anything similar seems theoretically not justified as we can not assume the stock price to be periodic (outside of the period of observation). So I don't really mean such applications.

Put differently: are there useful, theoretically valid applications of Fourier theory in trading? I am curious for any comments, thank you!

EDIT: I am aware of (theoretically $100\%$ valid) applications in option pricing and calculation of risk measures in the context of Lévy processes (see e.g. here p.11 and following and references therein). This is well established, I guess. What I mean are applications in time series analysis. Sorry for any confusions.

by Richard at July 28, 2014 12:08 PM

Planet Emacsen

Irreal: git-timemachine

Recently, I’ve seen several references to the git-timemachine package. It didn’t seem that interesting to me so I ignored it. Then I noticed that Bozhidar Batsov is recommending it on Emacs Redux. When Batsov recommends something, it’s generally an indication that that something is worth a look.

So I loaded git-timemachine from Melpa and started playing with it. It provides a functionality that, as far as I know, is missing or hard to use in git or magit. When you invoke git-timemachine on a file, you can scroll through all the versions of the file in git. This isn’t the commit records but the actual file—you get the ultimate in context.

If you often—or even sometimes—find yourself looking at older versions of a file, you should take a look at this package. It's easy to load and try out with ELPA. You don't even need to adjust your .emacs or init.el, just load the package and start using it. If you decide you don't like it, just uninstall it. It's definitely worth a look.

by jcs at July 28, 2014 11:56 AM

StackOverflow

Which is the most idiomatic way to "lift up" by some transformation both arguments of a binary function in Haskell?

Which is the most idiomatic way to "lift up" by some transformation both arguments of a binary function in Haskell? Let this operator be named "lift", so I expect it's type will be

lift :: (a -> b) -> (b -> b -> c) -> (a -> a -> c)

and a naive definition will be

lift t f = \x y -> f (t x) (t y)

by ramntry at July 28, 2014 11:54 AM

StackOverflow

IPython parallel computing vs pyzmq for cluster computing

I am currently working on some simulation code written in C, which runs on different remote machines. While the C part is finished I want to simplify my work by extending it with a python simulation api and some kind of a job-queue system, which should do the following:

1. Specify a set of parameters on which simulations should be performed and put them into a queue on a host computer.

2. Perform the simulations on remote machines via workers.

3. Return the results to the host computer.

I had a look at different frameworks for accomplishing this task and my first choice goes down to IPython.parallel. I had a look at the documentation and from what I tested out it seems pretty easy to use. My approach would be to use a load balanced view like explained at

http://ipython.org/ipython-doc/dev/parallel/parallel_task.html#creating-a-loadbalancedview-instance

But what I dont see is:

  • what happens i.e. if the ipcontroller crashes, is my job queue gone?
  • what happens if a remote machine crashes? is there some kind of error handling?

Since I run relatively long simulations (1-2 weeks) I don't want my simulations to fail if some part of the system crashes. So is there maybe some way to handle this in IPython.parallel?

My Second approach would be to use pyzmq and implement the jobsystem from scratch. In this case what would be the best zmq-pattern for this situation?

And last but not least, is there maybe a better framework for this scenario?

by jrsm at July 28, 2014 11:47 AM

Play framework, syntax highlighting in templates

I have IntelliJ Community Edition; is there a way I can get syntax highlighting in templates? (The Play plugin works only with the Ultimate edition, I've been told, and I don't know if that is required for highlighting, or if it even has highlighting.)

by user247077 at July 28, 2014 11:22 AM

Specify function composition through declarative maps in F#

The Clojure Prismatic/Plumbing library can be used in order to provide a declarative and explicit definition of an application or module functions' graph.

In short, it provides a means to specify each function as a node with a label, which is also the output label, the labeled inputs, and an implementation. It uses a custom keyword (fkn) defined in a macro for this purpose.

We have to develop a module in F# which performs relatively complex calculations in a hierarchical fashion that could benefit from Prismatic features, namely:

  • A graph can be built easily from function map, just taking inputs as dependencies. This graph can be analyzed, checked and visualized with very little code. Also subgraphs could be written providing further flexibility (valuable in our domain).
  • Function execution can be monitored. Not only performance but values in and out of each function.
  • Testing and Mocking of the system is really easy.

More on these and other features on github and infoq presentation:

What would be the fsharpest way to program this declarative definition of functions in a map in order to get these features?

by jruizaranguren at July 28, 2014 11:11 AM

Fred Wilson

On Getting An Outside Lead

There are some “truths” in the venture capital business that I have been hearing since I got into this game in the mid 80s. One of them is that getting “third party validation” by going outside of the current investor syndicate to find a new lead is good for the investors. I have come to believe this “wisdom” is nothing more than lack of conviction on the investor’s part.

What “super powers” do VCs have that allow them produce above average returns year after year after year? Well you could argue that some of us have the ability to see things before others see them. That might be true but it is hard to sustain that for a long time. You might argue that some of us have brands that allow us to get into the conversations with the best entrepreneurs when others can’t. That is most certainly true. You could argue that some of us have a tight focus on an investment strategy and work it tirelessly and don’t veer from it. That is most certainly true.

But short of those three things, I am not aware of a sustainable model that produces above average returns on investing in “new names”. However, there are two “super powers” that VCs have at their disposal that can produce above average returns year after year if they use them correctly. Those are the right to a board seat and the right to invest in round after round after round. I talked a bit about the latter one last week.

Taken together, these two rights put VCs in a position to intelligently invest in their existing portfolio companies. I believe that you can turn an average portfolio producing average returns into an average portfolio producing above average returns by intelligently investing in your existing portfolio companies.

It is one thing to take your pro-rata, and I talked a lot about that last week. But it is another thing to lead the next round and increase your ownership. It’s this latter move that I think many of us in the VC business instinctively avoid for fear that we are “falling in love with our companies.” Anyone who has been in the VC business for a long time has made the mistake of believing too much in a portfolio company and supporting it beyond when you rationally should. I have made that mistake so many times I can’t count them on two hands. It is my signature failure and I have not been able to stop doing it.

But, I would argue, the worse mistake is to know you’ve got a winner in your portfolio long before anyone else knows it and you allow a new investor to come in and lead the next round when you easily could and should. The upside on your best investments is the thing that allows an early stage VC to take so much risk and lose money on so many investments. Increasing the upside on the best investments is a rational move in light of the distribution of outcomes in a VC fund.

I would caveat all of this with a few things:

1) You have to let the entrepreneur do what they think is best for them and their company. If they want an outside lead, then by all means you should support that and work as hard as you can to make it happen.

2) You have to think about the amount of “dry powder” the current syndicate has and make sure that you aren’t using all of it up by leading a round when you should really be bringing in a new investor.

3) If an insider is leading a round, you should put a very fair deal on the table for the entrepreneur and the company. An inside lead is not about getting a “sweetheart” deal. It is about putting in place a fair deal for everyone.

4) If the valuation expectations of the founder and the company are unrealistic, then you should suggest that they go test the market. If there is a better offer out there at a better price than you would pay, that is always a good outcome for everyone.

There is a lot of signaling risk in all of this. If you are known to be aggressive in offering to lead inside rounds, and you don’t make that offer, then that puts the entrepreneur in a tricky spot. Of course the entrepreneur can say that they don’t want an inside lead and they want to expand the investor base. But even so, smart investors may know. Truth be told, there is signaling risk in everything that the existing investors do and anyone who thinks otherwise is just not seeing straight.

Two of my favorite examples of this strategy are YouTube and our portfolio company Etsy. At YouTube, Sequoia led the Series A and as far as I can tell (I’m not 100% sure), they led every round after that until the company sold to Google. That allowed Sequoia to allocate more and more capital to what was an incredibly great company and investment and get a massive return on a sale that sure felt like a monster at the time. At Etsy, USV participated in the seed round with some angel investors. We led the Series A and the Series B and increased our ownership substantially by doing that. On the Series C, Rob Kalin decided to get an outside lead and we were totally supportive of that decision. In both cases, I expect (or know) that the VCs had a better idea of how things were going (well!!!!) than anyone outside of the company.

There was a meme in the comment thread on my post last week (104 comments) about “insider trading”. I’d like to say something about that without getting legal or technical. In my view, insider trading is taking advantage of someone buying a stock from you or someone selling stock to you when you know something that they do not. It is illegal and should be. Purchasing stock from a portfolio company is unlikely to be insider trading because how can anyone suggest that you know more about a company than the company knows about itself? I guess that’s possible, but it’s a hard argument to make with a straight face. So while this insider lead thing may smell to some as insider trading, I am very confident it is nothing of the sort.

So in summary, when you have conviction that one of your investments is doing really well, you should have the courage to offer to lead an inside round (assuming you have sufficient capital including future reserves to do that). You should make the case to the entrepreneur and the board why that is a good idea. And if they decide to go outside and find a new lead, you should support that decision and do everything you can to make that strategy a success. I don’t think enough VCs do this and I think they should.

by Fred Wilson at July 28, 2014 11:09 AM

StackOverflow

PC-BSD as java development environment? [on hold]

I am considering setting up a Java development environment on a PC-BSD workstation. One of my primary reasons for using this OS is that it provides a ZFS option along with UFS during installation. PC-BSD also provides native FreeBSD tools in X window form. My hardware has 8 gigs of RAM and ZFS will leverage the resources efficiently. But on the other side, how does PC-BSD fare with the Oracle JDK? If not well, then what about other alternatives like OpenIndiana or Oracle Solaris? Or should I stick with Linux?

by rihbyne at July 28, 2014 10:38 AM

/r/emacs

recentf overwriting recentf-list

I am using recentf to maintain a list of recently opened files. This was working fine for me using the default settings. Then I changed the value of recentf-save-file and it is not working.

Emacs updates the save file correctly, but when I start Emacs again the value of recentf-list is set to nil.

My config is below. Thanks for any help you can give.

(use-package recentf
  :bind ("C-x C-r" . recentf-open-files)
  :init
  (progn
    (recentf-mode t)
    (setq recentf-max-menu-items 25)
    (setq recentf-save-file "~/.emacs.d/.recentf")))
submitted by amyannick
[link] [6 comments]

July 28, 2014 10:34 AM

StackOverflow

ansible sudo_user hangs for a few minutes and then fails (in a centos6.5.1 vagrant vm)

I have these two simple tasks :

- name: I am 
  shell: "echo `id`"

- name: say hello
  shell: echo "postgres saying hello"
  sudo_user: postgres

The second task fails after a long pause; the output is below (it's running with Vagrant at verbose level vvv). (Yes, I have verified that the user postgres exists; I can do a sudo su postgres from inside the VM.)

TASK: [postgresql | I am] ***************************************************** 
changed: [192.168.78.6] => {"changed": true, "cmd": "echo `id` ", "delta": "0:00:00.002511", "end": "2014-01-23 22:49:14.161249", "item": "", "rc": 0, "start": "2014-01-23 22:49:14.158738", "stderr": "", "stdout": "uid=0(root) gid=0(root) groups=0(root)"}

TASK: [postgresql | say hello] ************************************************ 
fatal: [192.168.78.6] => failed to parse: [sudo via ansible, key=fnfgfnxabemrzbfixwgoksvgjrfzplxf] password: 


FATAL: all hosts have already failed -- aborting

The thing runs in a centos6.5.1 vagrant vm

by Max L. at July 28, 2014 10:22 AM

How would one do dependency injection in scala?

I'm still at the beginning of learning Scala in addition to Java, and I didn't get how one is supposed to do DI there. Can or should I use an existing DI library, should it be done manually, or is there another way?
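
One commonly cited, library-free approach is the "cake pattern", where dependencies are declared with self-types and wired by mixing traits together. A minimal sketch (all names here are hypothetical):

// a component that provides a repository
trait UserRepositoryComponent {
  trait UserRepository { def find(id: Int): Option[String] }
  def userRepository: UserRepository
}

// a component that requires a repository, declared via the self-type
trait UserServiceComponent { this: UserRepositoryComponent =>
  class UserService {
    def name(id: Int): Option[String] = userRepository.find(id)
  }
}

// wiring happens in one place, by mixing the components together
object Production extends UserServiceComponent with UserRepositoryComponent {
  val userRepository = new UserRepository {
    def find(id: Int): Option[String] = Some(s"user-$id")
  }
}

Plain constructor injection and Java DI libraries such as Guice or Spring also work from Scala; the cake pattern is just the approach that needs no extra library.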

by Fabian at July 28, 2014 10:08 AM

Can't execute the Install manager file

I am trying to re-install the Tizen Wearable IDE. The problem is that when I execute the binary file "tizen-wearable-sdk-2.2.150_ubuntu64.bin" I get this message:

OpenJDK is not supported. Try again with Oracle JDK.

What is the problem?

by user1444393 at July 28, 2014 10:05 AM

Fefe

The electronic health card is delayed. ...

The electronic health card is delayed. NO! REALLY! OH!

With the health card it is like with nuclear fusion: it can only be a few more years away!

July 28, 2014 10:02 AM

StackOverflow

Check for acceptance of type, rather than value, with isDefinedAt

I have a case where I want use isDefinedAt to check if a partial function accepts a type, rather than a specific value.

val test: PartialFunction[Any, Unit] = { 
  case y: Int => ???
  case ComplexThing(x, y, z) => ??? 
}

Here you could do something like test isDefinedAt 1 to check for acceptance of that value, however, what I really want to do is check for acceptance of all Ints (more specifically, in my case the type I want to check is awkward to initialize (it has a lot of dependencies), so I would really like to avoid creating an instance if possible - for the moment I'm just using nulls, which feels ugly). Unfortunately, there is no test.isDefinedAt[Int].

I'm not worried about it only accepting some instances of that type - I would just like to know if it's completely impossible that type is accepted.

by Lattyware at July 28, 2014 10:00 AM

StackOverflow

How to start Clojure REPL from anywhere?

I can start Clojure REPL from directory when it was unpacked (C:\Program Files\clojure-1.6.0) by this command in command line:

java -cp clojure-1.6.0.jar clojure.main

but anytime I want to start the REPL I have to enter the directory C:\Program Files\clojure-1.6.0, so I created a bat file with the following content:

java -cp C:\Program Files\clojure-1.6.0\clojure-1.6.0.jar clojure.main

and put it in a directory which is included in the PATH variable. I expected that it would run the Clojure REPL, but instead I get an error

Error: Could not find or load main class Files\clojure-1.6.0\clojure-1.6.0.jar

And I can't find on the internet how to fix it. Please help.

by yyk at July 28, 2014 09:53 AM

/r/compsci

What is NP-Hard?

I always see algorithms or optimization problems called NP Hard or P Hard(?). What are these? Do they have to do with the difficulty of the problem? How is it measured? And what is the topic name so I can learn more about it?

submitted by AJ_M
[link] [51 comments]

July 28, 2014 09:46 AM

QuantOverflow

How to deal with extreme cases in normal random numbers generation?

In order to generate normal random numbers, one usually generates random numbers following a uniform distribution $Z \sim \mathcal{U}(0,1)$ and then applies the inverse CDF function to them: $X=\Phi^{-1}(Z) \sim \mathcal{N}(0,1)$.

However, I encountered a problematic case when one of the generated $Z$ turns out to give exactly 0. Then, you have $X=\Phi^{-1}(Z)=- \infty$.

This is pretty problematic when you generate random samples because it will usually break all your variance/covariance measures, basically returning nan or inf when the samples contain an infinite number.

How do you usually handle this? Do you check after each generated random number whether the value is 0 or 1 and shift it slightly (or simply discard it)?
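
A minimal sketch of the clamping approach (not from the original question; the epsilon is an arbitrary choice):

// keep uniform draws strictly inside (0, 1) before applying the inverse CDF
val eps = 1e-16
def clamp(u: Double): Double = math.min(math.max(u, eps), 1.0 - eps)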

by SRKX at July 28, 2014 09:37 AM

StackOverflow

Why no output from qx(ssh ...)?

If I in Bash do

a=$(ssh 10.10.10.46 ifconfig)

then I see the output in $a, but if I in Perl do

my @a = qx(ssh 10.10.10.46 ifconfig);
print Dumper @a;

then I don't get the output. I have ssh keys, so no login required.

For now would I just like to get simple output, but later I want to pipe from the remote host to the local host all in bash. Will be used for ZFS replication.

Question

Why don't I see the output in Perl?

by Jasmine Lognnes at July 28, 2014 09:14 AM

Fefe

Poll: 86.5% of Israelis are against a ceasefire. The ...

Poll: 86.5% of Israelis are against a ceasefire.

The questions, however, were leading:

When asking about a potential cease-fire, the poll gave two choices. The first endorsed a ceasefire because “Israel had enough achievements, soldiers have died, and it is time to stop.” The second said Israel cannot accept a cease-fire because “Hamas continues firing missiles on Israel, not all the tunnels have been found, and Hamas has not surrendered.”

July 28, 2014 09:02 AM

StackOverflow

Spray IO, add header to response

I have a (formerly REST) spray.io webservice. Now, I need to generate a SESSIONID in one of my methods to be used with some other methods. And I want it to be in the response header.

Basically, I imagine logic like the following:

 path("/...") {
   get {
     complete {
       // some logic here
       // .....
       someResult match {
         case Some(something) =>
           val sessionID = generateSessionID
           session(sessionID) = attachSomeData(something)
           // here I need help how to do my imaginary respond with header
           [ respond-with-header ? ]("X-My-SessionId", sessionID) {
             someDataMarshalledToJSON(something)
           }


         case None => throw .... // wrapped using error handler
       }
     } 
   }
 }

But it doesn't work inside complete; I mean the respondWithHeader directive does not. I need some advice.
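
A minimal sketch of one common way around it (assuming spray-routing's respondWithHeader directive and spray.http.HttpHeaders.RawHeader): compute the session id before entering complete, and let the directive wrap the inner route rather than being called inside it.

// sessionID and the marshalled body come from the surrounding logic
respondWithHeader(RawHeader("X-My-SessionId", sessionID)) {
  complete(someDataMarshalledToJSON(something))
}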

by dmitry at July 28, 2014 09:00 AM

Apache Spark: distinct doesnt work?

Here is my code example:

 case class Person(name:String,tel:String){
        def equals(that:Person):Boolean = that.name == this.name && this.tel == that.tel}

 val persons = Array(Person("peter","139"),Person("peter","139"),Person("john","111"))
 sc.parallelize(persons).distinct.collect

It returns

 res34: Array[Person] = Array(Person(john,111), Person(peter,139), Person(peter,139))

Why doesn't distinct work? I want the result to be Person("john",111), Person("peter",139).

by MrQuestion at July 28, 2014 08:56 AM

REST API with Akka in Java

I am trying to create my own REST-based API using Java and Akka. I have created my main algorithmic implementation using Akka already. My confusion is coming in the form of how to implement the REST part of this. Most examples and libraries I have seen are specifically for Scala, which I am at the moment trying to stay away from.

I see Spray is a good way to go, but I see it's supposed to be for Scala. However, I know Scala compiles down to Java Byte Code and Java should be able to call Scala and visa versa. Is it possible to do this with Spray? If so, are there any working examples or tutorials online? I am not having any luck anywhere.

Thanks for your help and time.

by marothisu at July 28, 2014 08:08 AM

QuantOverflow

CVA number used by Finance Team

What are different reasons, Finance Team will need CVA number for? Is there any specific regulatory reporting to be done?

by Saurabh at July 28, 2014 07:59 AM

/r/compsci

How would you classify this problem?

I'm currently working on a project that requires suggesting dates/times for activities based on a user's current schedule (what available blocks of time they have), time of day, day of week, etc. vs type of activity (e.g., you're probably less likely to exercise in the middle of the day if you're working). It should also take into account past responses from the user, i.e., whether or not they've followed that recommendation previously.

I guess in the broadest terms this is an optimization problem. I imagine you could define possible dates/times with a cost function. Beyond that I can't find any examples of a specific tool (e.g., simulated annealing, genetic algorithms) being used for this type of problem. I'm not interested in a particular implementation or available libraries as much as I'd like to know the best way to think about this.

Has anyone come across something similar to this? Can you suggest a really high level way to tackle this? (Please, nothing step by step, just a starting point.)

Thanks!

submitted by lattejed
[link] [1 comment]

July 28, 2014 07:29 AM

StackOverflow

Rational functional tester playing back error

I recorded a script using Rational Functional Tester. While playing it back, one text box does not get filled and the script fails. I tried selecting a different property, but the field only exposes a masked-field property. I use a hybrid framework.

by user3751571 at July 28, 2014 07:25 AM

Using Nightcode IDE with Clojure

I just started to learn Clojure in my free time and for fun. I installed Leiningen and I have the REPL working on the Windows command line. But I wanted to use an IDE and downloaded Nightcode. But I am having problems due to a lack of Java experience and a lack of documentation.

I tried to read Leiningen documentation but that did not make much sense either. I know I am not understanding the basics. When I click run on Nightcode I just get this result

Running...
Compiling my-project.core
Hello, World!

I know this is a very newbee question but can anyone direct me to the right place about how to enter some Clojure functions and run it and see the result in Nightcode? I was expecting to see something as easy as Python IDLE but this is very different.

Thanks.


Re Jared314's answer:

Now I have this code on the top window:

(ns newclojureproject.core
  (:gen-class))

(+ 1 2 3)

I click Run with REPL and I don't see the result for sum but this

my-first-project.clj=> Running...
Compiling my-first-project.clj
Running with REPL...
nREPL server started on port 51595 on host 127.0.0.1
REPL-y 0.2.1
Clojure 1.5.1
    Docs: (doc function-name-here)
          (find-doc "part-of-name-here")
  Source: (source function-name-here)
 Javadoc: (javadoc java-object-or-class-here)
    Exit: Control+D or (exit) or (quit)

my-first-project.clj=> Running...

=== Finished ===

=== Finished ===

What am I doing wrong?

by Zeynel at July 28, 2014 07:05 AM

Is an ambiguous implicit value the only way to make the error appear at compilation time?

trait Foo

trait Bar extends Foo

def doStuff[T <: Foo](x: T)(implicit ev: T =:!= Foo) = x

doStuff(new Foo{}) //ambiguous implicit value
doStuff(new Bar)// successful

Implicit resolution happens at compilation time, so here I think there may be two implicit values with exactly the same type, which triggers the ambiguity.

Right now, I am going to introduce shapeless to the team. My colleagues think this ambiguous-implicit error is not ideal, and I don't have a strong argument about it. Is this the only way to do it in order to be type safe in Scala? If it is, what can I do to customize the error message?

Edit:

In shapeless, I want to make the sum of two Nats not equal to 7; I can write code like this to make compilation fail:

def typeSafeSum[T <: Nat, W <: Nat, R <: Nat](x: T, y: W)
         (implicit sum: Sum.Aux[T, W, R], error: R =:!= _7) = x

typeSafeSum(_3, _4)

but the error message is "ambiguous implicit value"; how can I customize the error message?
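
A minimal sketch of the mechanism Scala offers for custom messages (note it applies when an implicit is missing, not when it is ambiguous; the trait name here is hypothetical):

import scala.annotation.implicitNotFound

@implicitNotFound("${A} and ${B} must be different types")
trait MustDiffer[A, B]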

by Cloud tech at July 28, 2014 06:35 AM

Scala implicit search depth [duplicate]

This question already has an answer here:

A simple example:

class A
class B
class C

object testobject {
  val a = new A
  implicit def b(a:A):B = new B
  implicit def c(b:B) = new C
  val b:B = a
  val c:C = a 
}

The last line doesn't compile. We have A=>B and B=>C implicit conversions defined but that doesn't infer A=>C.

It would be really nice to be able to have layers of implicit conversions work.

My particular problem, too long to post fully, is actually from a web framework. I want to do something like:

A => Secure[A] => Format[A]

with the following

implicit def secure[A](a:A):Secure[A] = ???
implicit def format[A](sec:Secure[A]):Format[A] = ???

So I want to handle security and formatting through implicit magic, and only secured outputs can be formatted.

Has anybody found any tricks to make this, or something like this work?
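
A minimal sketch of one known trick (not from the original post): make the second conversion generic and have it demand an implicit view to B, so a single conversion to C can chain through the A => B view.

class A; class B; class C

implicit def aToB(a: A): B = new B
implicit def toC[X](x: X)(implicit view: X => B): C = new C

val c: C = new A  // toC[A] applies, with aToB supplying the X => B view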

by triggerNZ at July 28, 2014 06:05 AM

CompsciOverflow

Is there a vanishing gradient in RNN training? [on hold]

One of the often cited issues in recurrent neural network training is the vanishing gradient problem [1,2,3,4].

However, I came across several papers by Anton Maximilian Schaefer, Steffen Udluft and Hans-Georg Zimmermann (e.g. [5]) in which they claim that the problem doesn't exist even in a simple RNN, if shared weights are used.

So, which one is true - does the vanishing gradient problem exist or not?


  1. Learning long-term dependencies with gradient descent is difficult by Y.Bengio et al. (1994)

  2. The Vanishing Gradient Problem During Learning Recurrent Neural Nets and Problem Solutions by S.Hochreiter (1997)

  3. Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies by S.Hochreiter et al. (2003)

  4. On the difficulty of training Recurrent Neural Networks by R.Pascanu et al. (2012)

  5. Learning long-term dependencies with recurrent neural networks by A.M. Schaefer et al. (2008)

by qwer1304 at July 28, 2014 06:03 AM

StackOverflow

Import Java library to Scala sbt project

I'm trying to import the Guava library in an SBT project (Play Framework), but can't compile my code

import com.google.common.net.InternetDomainName

class MyClass(link: String) {
  private val domains = {
    val host = new URL(link).getHost
    val domainName = InternetDomainName.from(host)
    domainName.topPrivateDomain().name()
  }
}

but I get compilation error

object google is not a member of package com

Can anyone explain what the problem is?
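
A minimal sketch of the usual fix (the version number here is an assumption; pick whichever release you need): declare Guava in build.sbt so sbt puts it on the compile classpath.

// build.sbt
libraryDependencies += "com.google.guava" % "guava" % "18.0"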

by pozharko at July 28, 2014 05:30 AM

Planet Clojure

Better Abstractions With core.async

I’ve been a big fan of Clojure’s core.async library since I first heard about it, and have been eagerly using it in a number of ways. I see core.async as fulfilling two roles: enabler and simplifier. ...

by Javalobby at July 28, 2014 05:30 AM

/r/compsci

How can I get started with data analysis?

There is so much data in this information age. There is dbpedia, quandl, freebase for crying out loud! All of this data and I don't know what to do with it. I've been coding for a couple years now and have mainly been doing web development, getting data from apis and displaying it for an end user, made pretty by some template. Got me a good job and everything. But this is really elementary stuff! I want to do data mining and analysis and do something with all that data. Find meaning.

How do I start out?

For the record I am behind in math (think precal). I'm attending a community college as a sophomore and working full-time as a web developer. I am looking for something to get me started in data analysis while I can study calculus and stats and linear algebra alongside and, later on, get into the more complex things like data mining, machine learning and AI.

What would you all recommend?

submitted by bidathrowaway
[link] [1 comment]

July 28, 2014 05:29 AM

StackOverflow

How to find the number of (key , value) pairs in a map in scala?

I need to find the number of (key, value) pairs in a Map in my Scala code. I can iterate through the map and get an answer, but I wanted to know whether there is any direct function for this purpose or not. Thanks in advance! :)
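
A minimal sketch (standard library only): Map is a collection, so size gives the number of key/value pairs directly.

val m = Map("a" -> 1, "b" -> 2)
val pairs: Int = m.size  // 2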

by user3851565 at July 28, 2014 05:18 AM

StringEscapeUtils.escapeJava

I am getting \u1F44A\u1F44A where I am expecting \ud83d\udc4d\ud83d\udc4d.

import org.apache.commons.lang3.StringEscapeUtils

val data="👍👍"

println(StringEscapeUtils.escapeJava(data))//\u1F44A\u1F44A

println(StringEscapeUtils.unescapeJava("\u1F44A\u1F44A"))//ὄAὄA

println(StringEscapeUtils.unescapeJava("\ud83d\udc4d\ud83d\udc4d"))//👍👍

How do I get \ud83d\udc4d\ud83d\udc4d?

by Govind Singh Nagarkoti at July 28, 2014 05:05 AM

Idiomatic alternative to `if (x) Some(y) else None`

I'm finding the following pattern popping up repeatedly in my code, and my intuition says there must be some idiomatic Scala way to better express this (Monadic or otherwise):

val someCollection: Seq[Thing] = ...
val makeBlah: Seq[Thing] => Blah = ...
...
if (someCollection.nonEmpty) Some(makeBlah(someCollection)) else None

To be more specific, I'm looking for something along the lines of what you can do with Option[T]:

val someOption: Option[Thing] = ...
val makeBlah: Thing => Blah = ...
...
val result: Option[Blah] = someOption.map(makeBlah)

...but with evaluation semantics based on some predicate rather than Some/None pattern matching in map.

While the example above uses a collection--first performing a test on it, optionally followed by an operation--I don't mean to imply a collections specific use case. You could imagine a case where Boolean is lifted or coerced into some monad:

val aThing: Thing = ...
val makeBlah: Thing => Blah = ...
val thingTest: Thing => Boolean ...
// theoretical
implicit def optionOnBoolean(b: Boolean): MonadOps[Option[Boolean]] = ... 
...
// NB: map could either have a Boolean parameter
//     that's always true, or be Unit.
//     Neither seem like good design 
val result: Option[Blah] = thingTest(aThing).map(makeBlah(aThing))

Intuitively this seems like a bad idea to me because it explicitly splits the data flow since you don't really have anything to pass via map.

When looking for a general approach that has "monadic-like" behavior without a closure to capture data, one has to answer the question of what to pass to map and how its connection to the predicate. Here's the type of construct that comes to mind:

val thing: Thing = ....
val makeBlah: Thing => Blah = ...
val thingTest: (Thing) => Boolean = ...
val result: Option[Blah] = WhenOption(thing, thingTest).map(makeBlah)

My question: Does something already exist in Scala proper, or does one have to venture out to Scalaz to get this sort of construct?

Or is there some other approach that is customary/idiomatic Scala?

Edit: My question is close to Scala - "if(true) Some(1)" without having to type "else None" but I wish to address the issue of achieving it without a closure.
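
For the collection case at the top, a minimal sketch of one standard-library-only spelling (not necessarily the most general answer): wrap the value in Some, filter on the predicate, then map.

val result: Option[Blah] =
  Some(someCollection).filter(_.nonEmpty).map(makeBlah)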

by Simeon Fitch at July 28, 2014 04:52 AM

How to use Lambda Expressions in Java 8? [closed]

How should we use lambda expressions? I have read much about them, but I don't understand them completely. And I have a main question: "We can use lambdas only in functional interfaces." Is that right?

by faraa at July 28, 2014 04:51 AM

sbt.IO zip how to preserve file permission

I recently created an sbt task to do the deployment work, which includes a shell script. In the sbt task, I am using sbt-IO to zip the shell script file.

However, it looks like the shell script inside the zip file doesn't preserve its file permissions. I know java.util.zip doesn't preserve permissions either, but Gradle does, so what other solutions do I have in an sbt project if I still want to use sbt-io and don't want to use a command-line zip plugin library like sbt-native-package?

by Cloud tech at July 28, 2014 04:50 AM

StackOverflow

How to expand macro within other macro's scope (trying to debug a macro)

here is the simplest example I could make:

(defmacro printer [& forms]
  `(println ~@forms))

(defmacro adder [s]
  `(inc ~s))

They can be used as expected:

(printer "haha")
=> "haha"

(adder 1)
=> 2

And I can macroexpand them to see what the macro did:

(macroexpand '(printer 1))
=> (clojure.core/println 1)

(macroexpand '(adder 1))
=> (clojure.core/inc 1)

But when they are nested I don't get what I want:

(macroexpand '(printer (adder 1)))
=> (clojure.core.println (adder 1))

I was hoping to get

=> (clojure.core.println (clojure.core/inc 1))

Is there any way for me to expand nested macros? That would help me a lot in debugging a specific bug.

by mascip at July 28, 2014 04:49 AM

Is there a way in Haskell to express a point free function to edit the property of a data type?

I have the type:

data Cons = Cons {myprop :: String}

and later on I'm mapping over a list, setting the property to a different value:

fmap (\x -> x{myprop = ""}) mylist

Is there a point free way to express this anonymous function?

by Andras Gyomrey at July 28, 2014 04:49 AM

Methods vs functions: unexpected unit return value

Why does the method below return a unit value (i.e., ()) when the equivalent function returns a boolean (as expected)?

// aMethod(1) returns ()
def aMethod(a: Int) { true }

// aFunction(1) returns true
val aFunction = (a: Int) => true
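
A minimal sketch of the usual explanation (procedure syntax): a def written without = has result type Unit regardless of its body, so adding = (and, optionally, an explicit result type) restores the Boolean.

// aMethodFixed(1) returns true
def aMethodFixed(a: Int): Boolean = { true }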

by Daniel at July 28, 2014 04:49 AM

Can I make a fully non-blocking backend application with http-kit and core.async?

I'm wondering if it's possible to put together a fully non-blocking Clojure backend web application with http-kit.

(Actually any Ring-compatible http server would be fine by me; I'm mentioning http-kit because it claims to have an event-driven, non-blocking model).

If I got it right (and I'm not an expert, so please tell me if I'm working on wrong assumptions), the principles of such a non-blocking model for a web application are the following:

  1. Have a few super-fast OS threads handle all the CPU-intensive computing; these must never be waiting.
  2. Have a lot of "weak threads" handle the IO (database calls, web-service calls, sleeping, etc.); these are meant mostly to be waiting.
  3. This is beneficial because the waiting time spent on the handling of a request is typically 2 (disk access) to 5 (web services calls) orders of magnitude higher than the computing time.

From what I have seen, this model is supported by default on the Play Framework (Scala) and Node.js (JavaScript) platforms, with promise-based utilities for managing asynchrony programmatically.

Let's try to do this in a Ring-based clojure app, with Compojure routing. I have a route that constructs the response by calling the my-handle function:

(defroutes my-routes
  (GET "/my/url" req (my-handle req))
  )
(def my-app (noir.util.middleware/app-handler [my-routes]))
(defn start-my-server! [] 
  (http-kit/run-server my-app))

It seems the commonly accepted way of managing asynchrony in Clojure applications is CSP-based, with the use of the core.async library, with which I'm totally fine. So if I wanted to embrace the non-blocking principles listed above, I'd implement my-handle this way :

(require '[clojure.core.async :as a])

(defn my-handle [req]
  (a/<!!
    (a/go ; `go` makes channels calls asynchronous, so I'm not really waiting here
     (let [my-db-resource (a/thread (fetch-my-db-resource)) ; `thread` will delegate the waiting to "weaker" threads
           my-web-resource (a/thread (fetch-my-web-resource))]
       (construct-my-response (a/<! my-db-resource)
                              (a/<! my-web-resource)))
     )))

The CPU-intensive construct-my-response task is performed in a go-block whereas the waiting for external resources is done in thread-blocks, as suggested by Tim Baldridge in this video on core.async (38'55'')

But that is not enough to make my application non-blocking. Whatever thread goes through my route and calls the my-handle function will be waiting for the response to be constructed, right?

Would it be beneficial (as I believe) to make this HTTP handling non-blocking as well, if so how can I achieve it?

by Valentin Waeselynck at July 28, 2014 04:48 AM

openjdk 1.7 in eclipse: operator is not allowed for source level below 1.7

Eclipse gives me an error:

'<>' operator is not allowed for source level below 1.7 

I guess this is because it is not using java 1.7. Except that it is. At least openjdk 1.7 (my OS is OpenSuse 12.3).

I switched back from kepler to juno to reduce some lags and try to figure out this bug as well, to no avail so far.

Some things I have tried: the default runtime for Eclipse is openjdk 1.7 (it says so in Help, About, Installation Details); in project properties, Java Build -> Library, I have manually added the openjdk location.

I would install the oracle version, but there is only 1.6 available from the opensuse repository. I already tried installing the rpm offered by oracle, that didn't put itself in my path and kind of messed everything up, so I removed that again.

It should work with openjdk as well no? Or do you think it has a bug?

ps: junit also was not recognised, so I manually linked to the jar file. Perhaps this is relevant information.

by dorien at July 28, 2014 04:48 AM

Functional way to evaluate boolean for a function that throws an exception

I'm trying to write the following function without using var, only val. Any ideas on how to approach this?

  def isValidBSONId(id: String): Boolean = {
    var valid: Boolean = false
    import reactivemongo.bson.utils.Converters._
    try {
      str2Hex(id)
      valid = true
    } catch  {
      case _ => valid = false
    }
    valid
  }
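
A minimal sketch of one val-free alternative using scala.util.Try (assuming str2Hex simply throws on invalid input, as in the snippet above):

import scala.util.Try
import reactivemongo.bson.utils.Converters._

def isValidBSONId(id: String): Boolean = Try(str2Hex(id)).isSuccess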

by Soumya Simanta at July 28, 2014 04:47 AM

Compare more than 2 conditions in object

This is more of a programming/logic question. It can be answered in any programming language (I expect C/C++/Java/Python; others I may not understand).

for(int i = 0 ; i < partsModelMasterList.size() ;i++)
            {
                PartsModel partsModel = partsModelMasterList.get(i);

                // compare model
                if(filterModel != null &&  partsModel.getPartModel().equalsIgnoreCase(filterModel))
                {
                    partsModelList.add(partsModel);
                }
                // compare product name
                else if(filterProduct != null && partsModel.getPartName().equalsIgnoreCase(filterProduct))
                {
                    partsModelList.add(partsModel);
                }
                // compare filterDescription
                else if(filterDescription != null && partsModel.getPartSpecs().contains(filterDescription))
                {
                    partsModelList.add(partsModel);
                }
            }

Here, as you can see, I compare each field differently.

So when any of the fields matches its criterion, the object gets added to the data structure, which I don't want unless it fulfills the others too. But at the same time the user may leave some fields blank, which makes some of them null.

Suppose it fulfils condition 1 but does not fulfil condition 2: it should be rejected.

What would be the best possible solution to this problem? Or will I have to write 9 (or however many) ifs for 3 comparisons?

by Shash at July 28, 2014 04:18 AM

Why aren't my if statements working?

I'm extremely new to Clojure and very new to functional programming. I'm expecting this to return True or False but it's just infinitely recursing and it doesn't seem to be hitting true at all.

My test data set is this:

(def y (list 1 2 3 4)) ; and I'm passing in 2 for X.


(defn occursIn [x y]
    (if (= y nil) 
        "False"
        (if (= x first y )
            "True"  
            (occursIn x (rest y))
        )
    )

)

by Agent 404 at July 28, 2014 04:17 AM

Passing optional callback into Swift function

I'm learning Swift lang, but I cannot pass optional callback argument into function:

func dismiss(completion: () -> Void) {
    if (completion) {
        return self.dismissViewControllerAnimated(true, completion: completion)
    }
    self.dismissModalViewControllerAnimated(true)
}

This shows me an error - Type () -> Void does not conform to protocol 'LogicValue'

Any suggestions?

by Kosmetika at July 28, 2014 04:16 AM

How to map a string to list of characters

Given a string "my_string", how do I convert this to a list of Strings: List("m", "y", "_"...) containing the component characters

by user3786300 at July 28, 2014 04:16 AM

Define function for extension of abstract class

I'm having trouble with type mismatches when trying to write a function that takes as input (and output) an object that extends an abstract class.

Here is my abstract class:

abstract class Agent {
  type geneType
  var genome: Array[geneType]
}

Here is my function:

def slice[T <: Agent](parentA: T, parentB: T):(T, T) = {
  val genomeSize = parentA.genome.length

  // Initialize children as identical to parents at first. 
  val childA = parentA
  val childB = parentB

  // the value 'index' is sampled randomly between 0 and 
  // the length of the genome, less 1.  
  // This code omitted for simplicity. 
  val index;
  val pAslice1 = parentA.genome.slice(0, index + 1)
  val pBslice1 = parentB.genome.slice(index + 1, genomeSize)
  val genomeA = Array.concat(pAslice1, pBslice1)
  childA.genome = genomeA

  // And similary for childB. 
  // ...
  // ...

  return (childA, childB)
}

I'm receiving an error (I'm running this with sbt, by the way) as follows:

[error] ..........  type mismatch;
[error]  found   : Array[parentA.geneType]
[error]  required: Array[T#geneType]

I'm not sure what the problem is, as I'm new to abstract classes, generic type parametrization, and probably other relevant concepts whose names I don't know.

by sinwav at July 28, 2014 04:16 AM

Type alias vs extension of abstract class

Note: this post refers to define function for extension of abstract class

I have an abstract class defined below:

abstract class Agent {
  type geneType
  val genome: Array[geneType]
  implicit def geneTag: reflect.ClassTag[geneType]
  def copy(newGenome: Array[geneType]): AgentT[geneType]
}
object Agent { type Typed[A] = Agent { type geneType = A }}

I've defined a function that operates on objects of type Typed[A]. But I get errors when I try to call that function on an object that extends the abstract class.

Here is the function (thanks to S.O. user @0__):

import Agent._

def slice[A](parentA: Typed[A], parentB: Typed[A]): (Typed[A], Typed[A]) = {
  val genomeSize = parentA.genome.length
  require (parentB.genome.length == genomeSize)
  import parentA.geneTag

  val index    = (math.random * genomeSize + 0.5).toInt
  val (aInit, aTail) = parentA.genome.splitAt(index)
  val (bInit, bTail) = parentB.genome.splitAt(index)
  val genomeA  = Array.concat(aInit, bTail)
  val genomeB  = Array.concat(bInit, aTail)
  (parentA.copy(genomeA), parentB.copy(genomeB))
}

But I get an error when I call the function with an object that extends Agent. This extension is defined below:

case class Prisoner(initGenome: Array[Boolean]) extends Agent {
  type geneType = Boolean
  val genome = initGenome
  def geneTag = implicitly[reflect.ClassTag[Boolean]]
  def copy(newGenome: Array[geneType], memSize:Int):AgentT[Boolean] = new Prisoner(newGenome:Array[Boolean], memSize: Int)
}

The code that causes the error is something like:

slice[Boolean](parentA:Prisoner, parentB:Prisoner)

The error is

[error]  found: Prisoner
[error]  required: Agent.Typed[Boolean]

Any guidance here? It seems like these types should be equivalent.

by sinwav at July 28, 2014 04:15 AM

Why should one prefer Option for error handling over exceptions in Scala?

So I'm learning functional Scala, and the book says exceptions break referential transparency, and thus Option should be used instead, like so:

def pattern(s: String): Option[Pattern] = {
  try {
    Some(Pattern.compile(s))
  } catch {
    case e: PatternSyntaxException => None
  }
}

This seems pretty bad; I mean it seems equivalent to:

catch(Exception e){
    return null;
}

Save for the fact that we can distinguish "null for error" from "null as genuine value". It seems it should at least return something that contains the error information like:

catch {
    case e: Exception => Fail(e)
}

What am I missing?
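
A minimal sketch of the usual middle ground (not from the book): scala.util.Try keeps the exception around, and can be narrowed to an Option when only presence or absence matters.

import java.util.regex.Pattern
import scala.util.Try

def pattern(s: String): Try[Pattern] = Try(Pattern.compile(s))

val maybe: Option[Pattern] = pattern("[a-z]+").toOption   // drops the error
val failure = pattern("[unclosed").failed.toOption        // keeps the exception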

by BasilTomato at July 28, 2014 04:13 AM

How to functionally Join multiple Deedle series in C#?

I am thinking of using Deedle to join hundreds of series into a frame. What is the best functional way to achieve this?

The immediate (imperative) thought is to create a frame object holder outside of the loop; then, within the loop, this object holder is used as the left-hand side of the series join.

On second thought, C# tail recursion? I have done some research and I am a bit lost as to whether C# can do tail recursion. There is only an F# example from Tomas' book 'functional programming for real world'.

Also, has anyone had a Deedle frame with hundreds of columns (1000 rows)? Is there a big performance impact? This may sound excessive, but it is done in spreadsheets quite commonly.

Any suggestions are welcome. Thank you, casbby

by casbby at July 28, 2014 04:11 AM

/r/compsci

Student Seeking Internship With GameDev/CompSci Related Employers

Hi, I am a sophomore student studying computer science and plan on pursuing a specialty in Game Systems/Design.

I'm wondering if anybody knows any studios or other software companies willing to hire a co-op student from a prestigious university (Kettering University).

Noteworthy skills are a proficiency with C, Java, and MATLAB languages with advanced knowledge of C++, as well as familiarity with Unreal 4, Unity (2D), and LOVE game engines.

These along with other talents I hold are in my resume, which I can send if needed.

I would really love to be able to break into the world of game design or software development early and I believe my position at Kettering gives me the perfect opportunity to do this. Kettering works on a 3-month rotation, so I'd go to school for 3 months and then work for 3 months before repeating the pattern, allowing me to acquire large amounts of experience while being able to maximize my education.

Any and all help is very appreciated, thank you!

submitted by Narfii
[link] [1 comment]

July 28, 2014 04:10 AM

StackOverflow

`mode` option in ansible synchronize does not work

I recently set up an ansible role with the task:

- name: "synchronize source"
  sudo: yes
  synchronize:
    src: "../../../../" # get source dir
    dest: "{{ app.user.home_folder }}/{{ app.name }}"
    mode: 700

Unfortunately, upon inspection, the transferred files have -rw-r--r--. Not a big deal, as I have set up another task to chmod the files, but I am wondering why this is.
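A likely explanation, with a hedged workaround sketch: in the synchronize module, mode selects the transfer direction (push or pull) rather than file permissions, so a value like 700 does not set permissions. One option is to set them in a follow-up file task; the sketch below reuses the paths from the question, and 0700 matches the apparent intent.

- name: "synchronize source"
  sudo: yes
  synchronize:
    src: "../../../../" # get source dir
    dest: "{{ app.user.home_folder }}/{{ app.name }}"

- name: "set permissions on the synced tree"
  sudo: yes
  file:
    path: "{{ app.user.home_folder }}/{{ app.name }}"
    state: directory
    mode: "0700"
    recurse: yes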

by Jehan at July 28, 2014 04:07 AM

StackOverflow

Can you explain closures (as they relate to Python)?

I've been reading a lot about closures and I think I understand them, but without clouding the picture for myself and others, I am hoping someone can explain closures as succinctly and clearly as possible. I'm looking for a simple explanation that might help me understand where and why I would want to use them.

by knowncitizen at July 28, 2014 04:00 AM

StackOverflow

How to define a function that output another function?

I want to define a function that takes some arguments as input, and uses them to make another function, then outputs the new function.

For example:

makeIncrease(n) --> return a function that takes an argument, and return (argument + n)

applyIncrease(increaseFn, m) --> will apply increaseFn to argument m

So if I do this: applyIncrease(makeIncrease(n), m) --> will return m+n

How can I do it in Python?

by Lucasian at July 28, 2014 03:50 AM

Why does "use" in Clojure call "in-ns" to return to the original namespace?

In Clojure use loads a lib and in addition refers to the lib's namespace.

load does not change the current namespace.

Then what is the purpose of the in-ns command that is implicitly called when a lib is loaded with use?

user=> (use 'project.core :verbose)
(clojure.core/load "/project/core")
(clojure.core/in-ns 'user)
(clojure.core/refer 'project.core)

In other words, isn't (clojure.core/in-ns 'user) in the previous example unnecessary?

by reus at July 28, 2014 03:22 AM

Planet Clojure

Clojure Gazette 1.86

Clojure Gazette 1.86
X-Men, React, Zippers

Clojure Gazette

Issue 1.86 July 27, 2014


Editorial

Hello functional programmers,

First, I have to thank everyone who helped support the Clojure Gazette last week. There were a bunch of new subscribers. It made July the best month in almost a year.

I'm still experimenting with the Gazette. And it takes a lot of work. I've done it for over two years. It's rewarding, but I'd like to start getting a little back for it.

You'll notice this week that I have started putting sponsored links in the post. The first link is for a great organization that does not make a profit, giving free Clojure intro classes to increase the diversity of programmers. It's called ClojureBridge and I'm very proud to be able to give them this listing. ClojureBridge is a quality organization run by awesome people making an important change in the world. Show them some love by letting the world know how awesome they are, and by telling them directly, too, on Twitter or through their contact form. Also tell anyone you know who fits their target audience and may be interested in learning Clojure.

I hope all of my sponsors can be as great as them. If you'd like to sponsor the Gazette and showcase your product or service, read the media kit and get in touch!

Rock on!

Sincerely,
Eric Normand <ericwnormand@gmail.com>

PS Please follow the DISCUSS links and comment!

React v0.11 Released


A new point release of React, the library behind Om, Reagent, and others. Performance improvements, rendering null, and many bug fixes. DISCUSS

Lisp in Summer Projects 2014 Winners


I was under a rock this year and missed the whole competition. But here are the winners! They look pretty cool, the judges are amazing, and it's a good cause. DISCUSS

Sponsored Link

ClojureBridge Melbourne


This workshop is intended to reach out to women who are interested in learning programming with Clojure. In this workshop, we'll take you through building a complete web application from scratch using Clojure.

lein-oneoff


A cool Leiningen plugin that aims to make it easier to create single-file projects. You declare the dependencies at the top of the file, then start your namespace declaration as normal. lein-oneoff takes care of fetching the dependencies. DISCUSS

Composing Test Generators


Have you ever wanted to write Prismatic Schemas and have it automatically create test.check generators? Well, somebody already did it in what looks like a very well-engineered way. This talk describes how it works and why. DISCUSS

Manifold


Described as a "compatibility layer for event-driven abstractions", this library has the ambitious goal of providing a unifying interface over Java BlockingQueues, Clojure lazy sequences, and core.async channels. DISCUSS

Sponsored Link

Sponsor the Clojure Gazette


The Clojure Gazette has fostered the inspiration of the Clojure community for over two years. Help support the Gazette by sponsoring a link, and get the exposure your product or service deserves. A link to your product could be right here, in front of thousands of Clojure developers. Download the media kit to learn more.

Clojure X-Men


A cute metaphor of the superpowers Clojure gives you. DISCUSS

Zippers Episode 1


Timothy Baldridge has been doing some screencasts, mostly about core.async and mostly available for a small fee or a subscription. His screencasts are a different style from mine, so if you like his style better, go for it. This video is about zippers and is free.

Job Listings

Promote your job to the best functional programmers


If you want the best Clojure programmers, speak to the ones who read academic, pragmatic, and quality content every week in the Gazette. Download the media kit to learn more.
Copyright © 2014 LispCast, All rights reserved.


by Clojure Gazette at July 28, 2014 02:59 AM

/r/compsci

What skills should I know, for building a new NoSQL database from scratch?

I'm interested in building a database from scratch, as a hobby / learning project. I've been a software developer for a few years, but I haven't specifically looked into database design before. What are some best practices, or general skills / techniques that I should read more about?

Thanks.

submitted by momslatin_dadsasian
[link] [18 comments]

July 28, 2014 02:54 AM

CompsciOverflow

B-Tree and how it is used in practice

I understand what a B-Tree is (I already implemented a B-Tree in Java with insert and delete methods that preserve the invariant).

However, I do not understand exactly how it is used, for example in file systems or databases.

  1. How do you choose the keys in these 2 scenarios? And what is the data?

  2. Are data and keys the same?

  3. I can't think of a good key that would order the data in a way that makes it easily accessible, for example if I want to find an entry containing some string.

  4. How useful would a key be that gives information about the size of each entry, or about the alphabetical order? I don't exactly understand the concept.

I have to add that I don't understand much about file systems or databases, which is probably why I'm confused.
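To make the database case concrete, here is a hedged, greatly simplified illustration (hypothetical types, with a TreeMap standing in for the on-disk B-Tree): the keys are the values of an indexed column, and the payload stored with each key is a pointer to where the full record lives, so the tree gives cheap point lookups and ordered range scans over that column.

import scala.collection.immutable.TreeMap

// Pretend "pointer" to a record on disk: which page it is on and where in the page.
final case class RecordPtr(page: Int, offset: Int)

// Index over an integer id column: key = id, value = where the full row is stored.
val indexById: TreeMap[Int, RecordPtr] =
  TreeMap(7 -> RecordPtr(1, 64), 42 -> RecordPtr(3, 128), 99 -> RecordPtr(5, 0))

val single = indexById.get(42)          // point lookup
val range  = indexById.range(10, 100)   // ordered range scan over ids in [10, 100)

To search by a string column instead, a separate index would be built whose keys are those strings in lexicographic order, again mapping to record pointers; the key does not need to encode entry sizes.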

by Nocta at July 28, 2014 02:35 AM

StackOverflow

How to generate a "qualified Select" using Scala macros?

I'm playing around with Scala macros. When reading examples, I often see this kind of pattern:

Select(
  Select(
    Ident(TermName("scala")), 
    TermName("Some")
  ), 
  TermName("apply")
)

That's quite verbose and repetitive. Is there any way to express this more concisely? I'm looking for something like:

select("scala.Some.apply")

by Lukas Eder at July 28, 2014 02:24 AM

CompsciOverflow

Guessing asymptotic complexity from benchmark data

What I want to do

Guess average case asymptotic complexity from benchmarks.

Approach

Consider a program which takes a set of data as input (for example, a simple list, filled with integers).

Assume the program is very complex and we don't know the internals of the program, but we know what it does. Also, we are allowed to run the program as often as we want, whereas we choose the size of the input set.

Now we do a lot of test runs with different sizes and remember the execution time for each test run.

From that data series, we could now infer an asymptotic complexity class for the given program, assuming we collected enough data.

Questions

What are the (obvious) caveats of such an approach?

How relevant could such an approach be for practical purposes?
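As one concrete (and very rough) heuristic, a hedged sketch: fit log(time) ≈ a + b·log(n) by least squares and read the slope b as an estimated polynomial exponent. This cannot distinguish log factors, is thrown off by caches, JIT warm-up and constant overheads, and says nothing about worst-case behaviour, which covers a large part of the caveats asked about above.

// Estimate the exponent b from (inputSize, seconds) samples via a log-log regression.
def estimateExponent(samples: Seq[(Double, Double)]): Double = {
  val pts = samples.map { case (n, t) => (math.log(n), math.log(t)) }
  val k   = pts.size.toDouble
  val sx  = pts.map(_._1).sum
  val sy  = pts.map(_._2).sum
  val sxx = pts.map(p => p._1 * p._1).sum
  val sxy = pts.map(p => p._1 * p._2).sum
  (k * sxy - sx * sy) / (k * sxx - sx * sx)   // slope of the fitted line
}

estimateExponent(Seq((1e3, 0.01), (1e4, 0.1), (1e5, 1.0)))  // ≈ 1.0, i.e. roughly linear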

by Emiswelt at July 28, 2014 02:06 AM

Planet Theory

The Change Problem and the Gap between Recreational and Serious Mathematics

In a prior post (here) I discussed the change problem: given a dollar, how many ways can you make change using pennies, nickels, dimes, and quarters (and generalize to n cents)? Scouring the web I found programs for it, the answer (242), and answers like "it is the coefficient of x^100 in ...". What I didn't find is a formula for n cents. So I derived one using recurrences and wrote it up with some other things about the change problem, and I posted it on arXiv. (I have updated the paper many times after comments I got. It is still where it was, but updated, here.)

I then got some very helpful emails pointing me to the vast math literature on this problem. I was not surprised there was such a literature, though I was surprised I hadn't found any of it searching the web.
The literature didn't have my formula, though it had several ways to derive it (some faster than mine).

Why isn't the formula out there? Why couldn't I find the literature?

  1. The formula falls into a space right between recreational and serious math. I use a recurrence but not a generating function. Recreational math just wants to know the answer for 100, so a program suffices (or some clever by-hand recurrences, which are also in my paper). Serious math people are happy to show that a formula CAN be derived and to refine that in terms of how quickly the coefficients can be found.
  2. There is a gap between recreational and serious math. In particular, the serious people aren't talking to (or posting on websites for) the rec people.
  3. Different terminologies. I was looking for "change problems" and "coin problems" and things like that when I should have been looking for "Sylvester denumerant" or "Frobenius problem".
For some areas Google (and other tools) are still not as good as finding someone who knows your area. My posting on the blog got the attention of some people who know this stuff, and my posting on arXiv got one more person (who knew A LOT!). I'm glad they emailed me comments and references so I could improve the paper and cite the literature properly. But this seems so haphazard! Google didn't work; Math StackExchange and other similar sites didn't work (that is where I saw the rec people post but not get a formula). Is it the case that no matter how good Google gets we'll still need to find people who know stuff? That's fine if you can, but sometimes you can't.
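To illustrate the "a program suffices" point, here is a short sketch of the standard dynamic-programming count (an illustration only, not the formula the post is about):

// Number of ways to make n cents from the given coin denominations; ways(100) == 242.
def ways(n: Int, coins: Seq[Int] = Seq(1, 5, 10, 25)): Long = {
  val table = Array.fill(n + 1)(0L)
  table(0) = 1L
  for (c <- coins; amount <- c to n) table(amount) += table(amount - c)
  table(n)
}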

by GASARCH (noreply@blogger.com) at July 28, 2014 01:50 AM

arXiv Computer Science and Game Theory

On Manipulation in Prediction Markets When Participants Influence Outcomes Directly. (arXiv:1407.7015v1 [cs.GT])

Prediction markets are often used as mechanisms to aggregate information about a future event, for example, whether a candidate will win an election. The event is typically assumed to be exogenous. In reality, participants may influence the outcome, and therefore (1) running the prediction market could change the incentives of participants in the process that creates the outcome (for example, agents may want to change their vote in an election), and (2) simple results such as the myopic incentive compatibility of proper scoring rules no longer hold in the prediction market itself. We introduce a model of games of this kind, where agents first trade in a prediction market and then take an action that influences the market outcome. Our two-stage two-player model, despite its simplicity, captures two aspects of real-world prediction markets: (1) agents may directly influence the outcome, (2) some of the agents instrumental in deciding the outcome may not take part in the prediction market. We show that this game has two different types of perfect Bayesian equilibria, which we term LPP and HPP, depending on the values of the belief parameters: in the LPP domain, equilibrium prices reveal expected market outcomes conditional on the participants' private information, whereas HPP equilibria are collusive -- participants effectively coordinate in an uninformative and untruthful way.

by Mithun Chakraborty, Sanmay Das at July 28, 2014 01:30 AM

A Key Pre-Distribution Scheme based on Multiple Block Codes for Wireless Sensor Networks. (arXiv:1407.7011v1 [cs.CR])

A key pre-distribution scheme (KPS) based on multiple codewords of block codes is presented for wireless sensor networks. The connectivity and security of the proposed KPS, quantified in terms of probabilities of sharing common keys for communications of pairs of nodes and their resilience against colluding nodes, are analytically assessed. The analysis is applicable to both linear and nonlinear codes and is simplified in the case of maximum distance separable codes. It is shown that the multiplicity of codes significantly enhances the security and connectivity of KPS at the cost of a modest increase of the nodes storage. Numerical and simulation results are provided, which sheds light on the effect of system parameters of the proposed KPS on its complexity and performance. Specifically, it is shown that the probability of resilience of secure pairs against collusion of other nodes only reduces slowly as the number of colluding nodes increase.

by Hamidreza Arjmandi, Farshad Lahouti at July 28, 2014 01:30 AM

Undecidability of the problem of recognizing axiomatizations for implicative propositional calculi. (arXiv:1407.7010v1 [math.LO])

In this paper we consider propositional calculi, which are finitely axiomatizable extensions of intuitionistic implicational propositional calculus together with the rules of modus ponens and substitution. We give a proof of undecidability of the following problem for these calculi: whether a given finite set of propositional formulas constitutes an adequate axiom system for a fixed propositional calculus. Moreover, we prove the same for the following restriction of this problem: whether a given finite set of theorems of a fixed propositional calculus derives all theorems of this calculus. The proof of these results is based on a reduction of the undecidable halting problem for the tag systems introduced by Post.

by Grigoriy V. Bokov at July 28, 2014 01:30 AM

RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response. (arXiv:1407.6981v1 [cs.CR])

Randomized Aggregatable Privacy-Preserving Ordinal Response, or RAPPOR, is a technology for crowdsourcing statistics from end-user client software, anonymously, with strong privacy guarantees. In short, RAPPORs allow the forest of client data to be studied, without permitting the possibility of looking at individual trees. By applying randomized response in a novel manner, RAPPOR provides the mechanisms for such collection as well as efficient, high-utility analysis of the collected data. RAPPOR permits statistics to be collected on the population of client-side strings with strong privacy guarantees for each client, and without linkability of their reports.

by Úlfar Erlingsson, Aleksandra Korolova, Vasyl Pihur at July 28, 2014 01:30 AM

Hardware extensions to make lazy subscription safe. (arXiv:1407.6968v1 [cs.PL])

Transactional Lock Elision (TLE) uses Hardware Transactional Memory (HTM) to execute unmodified critical sections concurrently, even if they are protected by the same lock. To ensure correctness, the transactions used to execute these critical sections "subscribe" to the lock by reading it and checking that it is available. A recent paper proposed using the tempting "lazy subscription" optimization for a similar technique in a different context, namely transactional systems that use a single global lock (SGL) to protect all transactional data. We identify several pitfalls that show that lazy subscription \emph{is not safe} for TLE because unmodified critical sections executing before subscribing to the lock may behave incorrectly in a number of subtle ways. We also show that recently proposed compiler support for modifying transaction code to ensure subscription occurs before any incorrect behavior could manifest is not sufficient to avoid all of the pitfalls we identify. We further argue that extending such compiler support to avoid all pitfalls would add substantial complexity and would usually limit the extent to which subscription can be deferred, undermining the effectiveness of the optimization. Hardware extensions suggested in the recent proposal also do not address all of the pitfalls we identify. In this extended version of our WTTM 2014 paper, we describe hardware extensions that make lazy subscription safe, both for SGL-based transactional systems and for TLE, without the need for special compiler support. We also explain how nontransactional loads can be exploited, if available, to further enhance the effectiveness of lazy subscription.

by Dave Dice, Timothy L. Harris, Alex Kogan, Yossi Lev, Mark Moir at July 28, 2014 01:30 AM

Distributed and Fair Beaconing Congestion Control Schemes for Vehicular Networks. (arXiv:1407.6965v1 [cs.NI])

Cooperative inter-vehicular applications rely on the exchange of broadcast single-hop status messages among vehicles, called beacons. The aggregated load on the wireless channel due to periodic beacons can prevent the transmission of other types of messages, what is called channel congestion due to beaconing activity. In this paper we approach the problem of controlling the beaconing rate on each vehicle by modeling it as a Network Utility Maximization (NUM) problem. This allows us to formally define the notion of fairness of a beaconing rate allocation in vehicular networks. The NUM model provides a rigorous framework to design a broad family of simple and decentralized algorithms, with proved convergence guarantees to a fair allocation solution. In this context, we propose the Fair Adaptive Beaconing Rate for Intervehicular Communications (FABRIC) algorithm, which uses a particular scaled gradient projection algorithm to solve the dual of the NUM problem. Simulation results validate our approach and show that, unlike recent proposals, FABRIC converges to fair rate allocations in multi-hop and dynamic scenarios.

by Esteban Egea-Lopez, Pablo Pavon-Mariño at July 28, 2014 01:30 AM

The DUNE-ALUGrid Module. (arXiv:1407.6954v1 [cs.MS])

In this paper we present the new DUNE-ALUGrid module. This module contains a major overhaul of the sources from the ALUgrid library and the binding to the DUNE software framework. The main improvements concern the parallel feature set of the library, including now user defined load balancing and parallel grid construction. In addition many improvements have been introduced into the code to increase the parallel efficiency and to decrease the memory footprint.

The original ALUGrid library is widely used within the DUNE community due to its good parallel performance for problems requiring local adaptivity and dynamic load balancing. Therefore this new model will benefit a number of DUNE users. In addition we have added features to increase the range of problems for which the grid manager can be used, for example, introducing a 3d tetrahedral grid using a parallel newest vertex bisection algorithm for conforming grid refinement. In this paper we will discuss the new features, extensions to the DUNE interface, and explain for various examples how the code is used in parallel environments.

by Andreas Dedner, Robert Klöfkorn, Martin Nolte at July 28, 2014 01:30 AM

How, when and how much a card deck is well shuffled? (arXiv:1407.6950v1 [cs.DM])

The thesis considers the mixing of a few (3-4) cards as well as of a large (52-card) deck. It shows the limit of shuffling to homogeneity, elaborated in a short program; the thesis is in Italian.

by Benjamin Isac Fargion at July 28, 2014 01:30 AM

OMP2HMPP: HMPP Source Code Generation from Programs with Pragma Extensions. (arXiv:1407.6932v1 [cs.DC])

High-performance computing is based more and more on heterogeneous architectures, and GPGPUs have become one of the main integrated blocks in these, such as the recently emerged Mali GPU in embedded systems or the NVIDIA GPUs in HPC servers. For both GPGPUs, programming could become a hurdle that can limit their adoption, since the programmer has to learn the hardware capabilities and the language to work with these. We present OMP2HMPP, a tool that automatically translates high-level C (OpenMP) source code into HMPP. The generated version rarely differs from a hand-coded HMPP version, and provides an important speedup, near 113%, that could later be improved by hand-coded CUDA. The generated code could be ported either to HPC servers or to embedded GPUs, due to the commonalities between them.

by Albert Saà-Garriga, David Castells-Rufas, Jordi Carrabina at July 28, 2014 01:30 AM

Accelerating Fast Fourier Transforms Using Hadoop and CUDA. (arXiv:1407.6915v1 [cs.DC])

There has been considerable research into improving Fast Fourier Transform (FFT) performance through parallelization and optimization for specialized hardware. However, even with those advancements, processing of very large files, over 1TB in size, still remains prohibitively slow. Analysts performing signal processing are forced to wait hours or days for results, which results in a disruption of their workflow and a decrease in productivity. In this paper we present a unique approach that not only parallelizes the workload over multi-cores, but distributes the problem over a cluster of graphics processing unit (GPU)-equipped servers. By utilizing Hadoop and CUDA, we can take advantage of inexpensive servers while still exceeding the processing power of a dedicated supercomputer, as demonstrated in our result using Amazon EC2.

by Rostislav Tsiomenko, Bradley S. Rees at July 28, 2014 01:30 AM

Survey of Parallel Computing with MATLAB. (arXiv:1407.6878v1 [cs.DC])

Matlab is one of the most widely used mathematical computing environments in technical computing. It has an interactive environment which provides high performance computing (HPC) procedures and is easy to use. Parallel computing with Matlab has been an area of interest for parallel computing researchers for a number of years, and there have been many attempts to parallelize Matlab. In this paper, we present most of the past and present attempts at parallel Matlab, such as MatlabMPI, bcMPI, pMatlab, Star-P and PCT. Finally, we discuss expected future attempts.

by Zaid Abdi Alkareem Alyasseri at July 28, 2014 01:30 AM

On Partial Wait-Freedom in Transactional Memory. (arXiv:1407.6876v1 [cs.DC])

Transactional memory (TM) is a convenient synchronization tool that allows concurrent threads to declare sequences of instructions on shared data as speculative \emph{transactions} with "all-or-nothing" semantics. It is known that dynamic transactional memory cannot provide \emph{wait-free} progress in the sense that every transaction commits in a finite number of its own steps. In this paper, we explore the costs of providing wait-freedom to only a \emph{subset} of transactions. Since most transactional workloads are believed to be read-dominated, we require that read-only transactions commit in the wait-free manner, while updating transactions are guaranteed to commit only if they run in the absence of concurrency. We show that this kind of partial wait-freedom, combined with attractive requirements like read invisibility or disjoint-access parallelism, incurs considerable complexity costs.

by Petr Kuznetsov, Srivatsan Ravi at July 28, 2014 01:30 AM

Higher-Order Approximate Relational Refinement Types for Mechanism Design and Differential Privacy. (arXiv:1407.6845v1 [cs.PL])

Mechanism design is the study of algorithm design in which the inputs to the algorithm are controlled by strategic agents, who must be incentivized to faithfully report them. Unlike typical programmatic properties, it is not sufficient for algorithms to merely satisfy the property---incentive properties are only useful if the strategic agents also believe this fact.

Verification is an attractive way to convince agents that the incentive properties actually hold, but mechanism design poses several unique challenges: interesting properties can be sophisticated relational properties of probabilistic computations involving expected values, and mechanisms may rely on other probabilistic properties, like differential privacy, to achieve their goals.

We introduce a relational refinement type system, called $\lambda_{\mathsf{ref}}^2$, for verifying mechanism design and differential privacy. We show that $\lambda_{\mathsf{ref}}^2$ is sound w.r.t. a denotational semantics, and correctly models $(\epsilon,\delta)$-differential privacy; moreover, we show that it subsumes DFuzz, an existing linear dependent type system for differential privacy. Finally, we develop an SMT-based implementation of $\lambda_{\mathsf{ref}}^2$ and use it to verify challenging examples of mechanism design, including auctions and aggregative games, and new proposed examples from differential privacy.

by Gilles Barthe, Marco Gaboardi, Emilio Jesús Gallego Arias, Justin Hsu, Aaron Roth, Pierre-Yves Strub at July 28, 2014 01:30 AM

Aber-OWL: a framework for ontology-based data access in biology. (arXiv:1407.6812v1 [cs.DB])

Many ontologies have been developed in biology and these ontologies increasingly contain large volumes of formalized knowledge commonly expressed in the Web Ontology Language (OWL). Computational access to the knowledge contained within these ontologies relies on the use of automated reasoning. We have developed the Aber-OWL infrastructure that provides reasoning services for bio-ontologies. Aber-OWL consists of an ontology repository, a set of web services and web interfaces that enable ontology-based semantic access to biological data and literature. Aber-OWL is freely available at this http URL

by Robert Hoehndorf, Luke Slater, Paul N. Schofield, Georgios V. Gkoutos at July 28, 2014 01:30 AM

Quantum signaling game. (arXiv:1407.6757v1 [cs.GT])

We present a quantum approach to a signaling game; a special kind of extensive games of incomplete information. Our model is based on quantum schemes for games in strategic form where players perform unitary operators on their own qubits of some fixed initial state and the payoff function is given by a measurement on the resulting final state. We show that the quantum game induced by our scheme coincides with a signaling game as a special case and outputs nonclassical results in general. As an example, we consider a quantum extension of the signaling game in which the chance move is a three-parameter unitary operator whereas the players' actions are equivalent to classical ones. In this case, we study the game in terms of Nash equilibria and refine the pure Nash equilibria adapting to the quantum game the notion of a weak perfect Bayesian equilibrium.

by Piotr Frackiewicz at July 28, 2014 01:30 AM

On Distributed Graph Coloring with Iterative Recoloring. (arXiv:1407.6745v1 [cs.DC])

Identifying the sets of operations that can be executed simultaneously is an important problem appearing in many parallel applications. By modeling the operations and their interactions as a graph, one can identify the independent operations by solving a graph coloring problem. Many efficient sequential algorithms are known for this NP-Complete problem, but they are typically unsuitable when the operations and their interactions are distributed in the memory of large parallel computers. On top of an existing distributed-memory graph coloring algorithm, we investigate two compatible techniques in this paper for fast and scalable distributed-memory graph coloring. First, we introduce an improvement for the distributed post-processing operation, called recoloring, which drastically improves the number of colors. We propose a novel and efficient communication scheme for recoloring which enables it to scale gracefully. Recoloring must be seeded with an existing coloring of the graph. Our second contribution is to introduce a randomized color selection strategy for initial coloring which quickly produces solutions of modest quality. We extensively evaluate the impact of our new techniques on existing distributed algorithms and show the time-quality tradeoffs. We show that combining an initial randomized coloring with multiple recoloring iterations yields better quality solutions with the smaller runtime at large scale.

by Ahmet Erdem Sarıyüce, Erik Saule, Ümit V. Çatalyürek at July 28, 2014 01:30 AM

Optimal User-Cell Association for Massive MIMO Wireless Networks. (arXiv:1407.6731v1 [cs.NI])

The use of a very large number of antennas at each base station site (referred to as "Massive MIMO") is one of the most promising approaches to cope with the predicted wireless data traffic explosion. In combination with Time Division Duplex and with simple per-cell processing, it achieves large throughput per cell, low latency, and attractive power efficiency performance. Following the current wireless technology trend of moving to higher frequency bands and denser small cell deployments, a large number of antennas can be implemented within a small form factor even in small cell base stations. In a heterogeneous network formed by large (macro) and small cell BSs, a key system optimization problem consists of "load balancing", that is, associating users to BSs in order to avoid congested hot-spots and/or under-utilized infrastructure. In this paper, we consider the user-BS association problem for a massive MIMO heterogeneous network. We formulate the problem as a network utility maximization, and provide a centralized solution in terms of the fraction of transmission resources (time-frequency slots) over which each user is served by a given BS. Furthermore, we show that such a solution is physically realizable, i.e., there exists a sequence of integer scheduling configurations realizing (by time-sharing) the optimal fractions. While this solution is optimal, it requires centralized computation. Then, we also consider decentralized user-centric schemes, formulated as non-cooperative games where each user makes individual selfish association decisions based only on its local information. We identify a class of schemes such that their Nash equilibrium is very close to the global centralized optimum. Hence, these user-centric algorithms are attractive not only for their simplicity and fully decentralized implementation, but also because they operate near the system "social" optimum.

by Dilip Bethanabhotla, Ozgun Bursalioglu, Haralabos Papadopoulos, Giuseppe Caire at July 28, 2014 01:30 AM

The Undecidability of the Definability of Principal Subcongruences. (arXiv:1301.5588v3 [math.LO] UPDATED)

For each Turing machine T, we construct an algebra A'(T) such that the variety generated by A'(T) has definable principal subcongruences if and only if T halts, thus proving that the property of having definable principal subcongruences is undecidable for a finite algebra. A consequence of this is that there is no algorithm that takes as input a finite algebra and decides whether that algebra is finitely based.

by Matthew Moore at July 28, 2014 01:30 AM

Image Encryption Using Fibonacci-Lucas Transformation. (arXiv:1210.5912v2 [cs.CR] UPDATED)

Secret communication techniques have been in great demand for the last 3000 years due to the need for information security and confidentiality at various levels of communication, such as when communicating confidential personal data, medical data of patients, defence and intelligence information of countries, data related to examinations, etc. With advancements in image processing research, image encryption and steganographic techniques have gained popularity over other forms of hidden communication during the last few decades, and a number of image encryption models have been suggested by various researchers from time to time. In this paper, we suggest a new image encryption model based on Fibonacci and Lucas series.

by Minati Mishra, Priyadarsini Mishra, M.C. Adhikary, Sunit Kumar at July 28, 2014 01:30 AM

StackOverflow

ansible: lineinfile for several lines?

In the same way that there is "lineinfile" to add one line to a file, is there a way to add several lines? I do not want to use a template because you have to provide the whole file. I just want to add something to an existing file without necessarily knowing what the file already contains, so a template is not an option.
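For what it's worth, a hedged sketch of one common workaround at the time: loop lineinfile over a list of lines with with_items (the file name and lines here are placeholders).

- name: add several lines to an existing file
  lineinfile:
    dest: /etc/example.conf
    line: "{{ item }}"
  with_items:
    - "first line to add"
    - "second line to add"
    - "third line to add"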

Thanks

by YAmikep at July 28, 2014 01:27 AM

Planet Theory

Topological Similarity of Random Cell Complexes and Applications to Dislocation Configurations

Authors: Benjamin Schweinhart, Jeremy Mason, Robert MacPherson
Download: PDF
Abstract: Although random cell complexes occur throughout the physical sciences, there does not appear to be a standard way to quantify their statistical similarities and differences. The various proposals in the literature are usually motivated by the analysis of particular physical systems and do not necessarily apply to general situations. The central concepts in this paper---the swatch and the cloth---provide a description of the local topology of a cell complex that is general (any physical system that may be represented as a cell complex is admissible) and complete (any statistical question about the local topology may be answered from the cloth). Furthermore, this approach allows a distance to be defined that measures the similarity of the local topology of two cell complexes. The distance is used to identify a steady state of a model dislocation network evolving by energy minimization, and then to rigorously quantify the approach of the simulation to this steady state.

July 28, 2014 12:42 AM

Faster Fully-Dynamic Minimum Spanning Forest

Authors: Jacob Holm, Eva Rotenberg, Christian Wulff-Nilsen
Download: PDF
Abstract: We give a new data structure for the fully-dynamic minimum spanning forest problem in simple graphs. Edge updates are supported in $O(\log^4n/\log\log n)$ amortized time per operation, improving the $O(\log^4n)$ amortized bound of Holm et al. (STOC'98, JACM'01). We assume the Word-RAM model with standard instructions.

July 28, 2014 12:42 AM

GCD Computation of n Integers

Authors: Shri Prakash Dwivedi
Download: PDF
Abstract: Greatest Common Divisor (GCD) computation is one of the most important operations of algorithmic number theory. In this paper we present algorithms for GCD computation of $n$ integers. We extend Euclid's algorithm and the binary GCD algorithm to compute the GCD of more than two integers.
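As a hedged illustration of the problem statement (not of the paper's algorithms): the GCD of $n$ integers can be computed by folding the two-argument Euclidean GCD over the list, stopping early once the running GCD hits 1.

def gcdN(xs: Seq[Long]): Long = {
  @annotation.tailrec
  def gcd2(a: Long, b: Long): Long = if (b == 0) math.abs(a) else gcd2(b, a % b)
  // gcd(0, x) == |x|, so 0 is the natural seed; once the GCD is 1 it can never change.
  xs.foldLeft(0L)((g, x) => if (g == 1L) 1L else gcd2(g, x))
}

gcdN(Seq(12L, 18L, 24L))  // == 6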

July 28, 2014 12:41 AM

Faster Separators for Shallow Minor-Free Graphs via Dynamic Approximate Distance Oracles

Authors: Christian Wulff-Nilsen
Download: PDF
Abstract: Plotkin, Rao, and Smith (SODA'97) showed that any graph with $m$ edges and $n$ vertices that excludes $K_h$ as a depth $O(\ell\log n)$-minor has a separator of size $O(n/\ell + \ell h^2\log n)$ and that such a separator can be found in $O(mn/\ell)$ time. A time bound of $O(m + n^{2+\epsilon}/\ell)$ for any constant $\epsilon > 0$ was later given (W., FOCS'11) which is an improvement for non-sparse graphs. We give three new algorithms. The first has the same separator size and running time $O(\mbox{poly}(h)\ell m^{1+\epsilon})$. This is a significant improvement for small $h$ and $\ell$. If $\ell = \Omega(n^{\epsilon'})$ for an arbitrarily small chosen constant $\epsilon' > 0$, we get a time bound of $O(\mbox{poly}(h)\ell n^{1+\epsilon})$. The second algorithm achieves the same separator size (with a slightly larger polynomial dependency on $h$) and running time $O(\mbox{poly}(h)(\sqrt\ell n^{1+\epsilon} + n^{2+\epsilon}/\ell^{3/2}))$ when $\ell = \Omega(n^{\epsilon'})$. Our third algorithm has running time $O(\mbox{poly}(h)\sqrt\ell n^{1+\epsilon})$ when $\ell = \Omega(n^{\epsilon'})$. It finds a separator of size $O(n/\ell) + \tilde O(\mbox{poly}(h)\ell\sqrt n)$ which is no worse than previous bounds when $h$ is fixed and $\ell = \tilde O(n^{1/4})$. A main tool in obtaining our results is a novel application of a decremental approximate distance oracle of Roditty and Zwick.

July 28, 2014 12:41 AM

Near-Linear Time Constant-Factor Approximation Algorithm for Branch-Decomposition of Planar Graphs

Authors: Qianping Gu, Gengchun Xu
Download: PDF
Abstract: We give constant-factor approximation algorithms for branch-decomposition of planar graphs. Our main result is an algorithm which for an input planar graph $G$ of $n$ vertices and integer $k$, in $O(n\log^4n)$ time either constructs a branch-decomposition of $G$ with width at most $(2+\delta)k$, $\delta>0$ is a constant, or a $(k+1)\times \ceil{\frac{k+1}{2}}$ cylinder minor of $G$ implying $\bw(G)>k$, $\bw(G)$ is the branchwidth of $G$. This is the first $\tilde{O}(n)$ time constant-factor approximation for branchwidth/treewidth and largest grid/cylinder minors of planar graphs and improves the previous $\min\{O(n^{1+\epsilon}),O(nk^3)\}$ ($\epsilon>0$ is a constant) time constant-factor approximations. For a planar graph $G$ and $k=\bw(G)$, a branch-decomposition of width at most $(2+\delta)k$ and a $g\times \frac{g}{2}$ cylinder/grid minor with $g=\frac{k}{\beta}$, $\beta>2$ is constant, can be computed by our algorithm in $O(n\log^4n\log k)$ time.

July 28, 2014 12:41 AM

Sim, É Possível Ordenar Com Complexidade Estritamente Abaixo de $n$ lg $n$ (Yes, It Is Possible to Sort with Complexity Strictly Below $n$ lg $n$)

Authors: Rogério H. B. de Lima, Luis A. A. Meira
Download: PDF
Abstract: Sorting is one of the most important problems in computer science. After more than 60 years of study, there is still much research devoted to developing faster sorting algorithms. This work aims to explain the Fusion Tree data structure. The Fusion Tree was responsible for the first sorting algorithm with time $o(n \lg n)$.

July 28, 2014 12:41 AM

3SUM Hardness in (Dynamic) Data Structures

Authors: Tsvi Kopelowitz, Seth Pettie, Ely Porat
Download: PDF
Abstract: We prove lower bounds for several (dynamic) data structure problems conditioned on the well known conjecture that 3SUM cannot be solved in $O(n^{2-\Omega(1)})$ time. This continues a line of work that was initiated by Patrascu [STOC 2010] and strengthened recently by Abboud and Vassilevska-Williams [FOCS 2014]. The problems we consider are from several subfields of algorithms, including text indexing, dynamic and fault tolerant graph problems, and distance oracles. In particular we prove polynomial lower bounds for the data structure version of the following problems: Dictionary Matching with Gaps, Document Retrieval problems with more than one pattern or an excluded pattern, Maximum Cardinality Matching in bipartite graphs (improving known lower bounds), d-failure Connectivity Oracles, Preprocessing for Induced Subgraphs, and Distance Oracles for Colors.

Our lower bounds are based on several reductions from 3SUM to a special set intersection problem introduced by Patrascu, which we call Patrascu's Problem. In particular, we provide a new reduction from 3SUM to Patrascu's Problem which allows us to obtain stronger conditional lower bounds for (some) problems that have already been shown to be 3SUM hard, and for several of the problems examined here. Our other lower bounds are based on reductions from the Convolution3SUM problem, which was introduced by Patrascu. We also prove that up to a logarithmic factor, the Convolution3SUM problem is equivalent to 3SUM when the inputs are integers. A previous reduction of Patrascu shows that a subquadratic algorithm for Convolution3SUM implies a similarly subquadratic 3SUM algorithm, but not that the two problems are asymptotically equivalent or nearly equivalent.

July 28, 2014 12:41 AM

Word-packing Algorithms for Dynamic Connectivity and Dynamic Sets

Authors: Casper Kejlberg-Rasmussen, Tsvi Kopelowitz, Seth Pettie, Ely Porat
Download: PDF
Abstract: We examine several (dynamic) graph and set intersection problems in the word-RAM model with word size $w$. We begin with Dynamic Connectivity where we need to maintain a fully dynamic graph $G=(V,E)$ with $n=|V|$ vertices while supporting $(s,t)$-connectivity queries. To do this, we provide a new simplified worst-case solution for the famous Dynamic Connectivity (which is interesting on its own merit), and then show how in the word-RAM model the query and update cost can be reduced to $O(\sqrt{n\frac{\log n}{w}\log(\frac{w}{\log n})})$, assuming $w < n^{1-\Omega(1)}$. Since $w=\Omega(\log n)$, this bound is always $O(\sqrt{n})$ and it is $o(\sqrt{n})$ when $w=\omega(\log n)$.

We then examine the task of maintaining a family $F$ of dynamic sets where insertions and deletions into the sets are allowed, while enabling a set intersection reporting queries on sets from $F$. We first show that given a known upper-bound $d$ on the size of any set, we can maintain $F$ so that a set intersection reporting query costs $O(\frac{d}{w/log^2 w})$ expected time, and updates cost $O(\log w)$ expected time. Using this algorithm we can list all triangles of a graph $G=(V,E)$ in $O(\frac{m\alpha}{w/log^2 w} +t)$ expected time where $m=|E|$, $\alpha$ is the arboricity of $G$, and $t$ is the size of the output. This is comparable with known algorithms that run in $O(m \alpha)$ time.

Next, we provide an incremental data structure on $F$ that supports intersection proof queries where two sets that intersect must return an element in the intersection. Both queries and insertions of elements into sets take $O(\sqrt \frac{N}{w/log^2 w})$ expected time, where $N=\sum_{S\in F} |S|$. Finally, we provide time/space tradeoffs for the fully dynamic set intersection listing problem.

July 28, 2014 12:41 AM

The Power of Two Choices with Simple Tabulation

Authors: Søren Dahlgaard, Mathias Bæk Tejs Knudsen, Eva Rotenberg, Mikkel Thorup
Download: PDF
Abstract: The power of two choices is a classic paradigm used for assigning $m$ balls to $n$ bins. When placing a ball we pick two bins according to some hash functions $h_0$ and $h_1$, and place the ball in the least full bin. It was shown by Azar et al. [STOC'94] that for $m = O(n)$ with perfectly random hash functions this scheme yields a maximum load of $\lg \lg n + O(1)$ with high probability. The two-choice paradigm has many applications in e.g. hash tables and on-line assignment of tasks to servers.

In this paper we investigate the two-choice paradigm using the very efficient simple tabulation hashing scheme. This scheme dates back to Zobrist in 1970, and has since been studied by Pătraşcu and Thorup [STOC'11]. Pătraşcu and Thorup claimed without proof that simple tabulation gives an expected maximum load of $O(\log\log n)$. We improve their result in two ways. We show that the expected maximum load, when placing $m = O(n)$ balls into two tables of $n$ bins, is at most $\lg\lg n + O(1)$. Furthermore, unlike with fully random hashing, we show that with simple tabulation hashing, the maximum load is not bounded by $\lg \lg n + O(1)$, or even $(1+o(1))\lg \lg n$ with high probability. However, we do show that it is bounded by $O(\log \log n)$ with high probability, which is only a constant factor worse than the fully random case. Previously, such bounds have required $\Omega(\log n)$-independent hashing, or other methods that require $\omega(1)$ computation time.
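For readers unfamiliar with the paradigm itself, a hedged sketch of plain two-choice placement (this says nothing about simple tabulation, which is the paper's actual subject): each ball consults two hash functions and goes to the currently less loaded of its two candidate bins.

// Assumes h0 and h1 return non-negative values; returns the final load of each bin.
def twoChoice(balls: Seq[Int], bins: Int, h0: Int => Int, h1: Int => Int): Array[Int] = {
  val load = Array.fill(bins)(0)
  balls.foreach { b =>
    val (i, j) = (h0(b) % bins, h1(b) % bins)
    val target = if (load(i) <= load(j)) i else j
    load(target) += 1
  }
  load
}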

July 28, 2014 12:41 AM

New routing techniques and their applications

Authors: Liam Roditty, Roei Tov
Download: PDF
Abstract: Let $G=(V,E)$ be an undirected graph with $n$ vertices and $m$ edges. We obtain the following new routing schemes:

- A routing scheme for unweighted graphs that uses $\tilde O(\frac{1}{\epsilon} n^{2/3})$ space at each vertex and $\tilde O(1/\epsilon)$-bit headers, to route a message between any pair of vertices $u,v\in V$ on a $(2 + \epsilon,1)$-stretch path, i.e., a path of length at most $(2+\epsilon)\cdot d(u,v)+1$. This should be compared to the $(2,1)$-stretch and $\tilde O(n^{5/3})$ space distance oracle of Pătraşcu and Roditty [FOCS'10 and SIAM J. Comput. 2014] and to the $(2,1)$-stretch routing scheme of Abraham and Gavoille [DISC'11] that uses $\tilde O( n^{3/4})$ space at each vertex.

- A routing scheme for weighted graphs with normalized diameter $D$, that uses $\tilde O(\frac{1}{\epsilon} n^{1/3}\log D)$ space at each vertex and $\tilde O(\frac{1}{\epsilon}\log D)$-bit headers, to route a message between any pair of vertices on a $(5+\epsilon)$-stretch path. This should be compared to the $5$-stretch and $\tilde O(n^{4/3})$ space distance oracle of Thorup and Zwick [STOC'01 and J. ACM. 2005] and to the $7$-stretch routing scheme of Thorup and Zwick [SPAA'01] that uses $\tilde O( n^{1/3})$ space at each vertex. Since a $5$-stretch routing scheme must use tables of $\Omega( n^{1/3})$ space our result is almost tight.

- For an integer $\ell>1$, a routing scheme for unweighted graphs that uses $\tilde O(\ell\frac{1}{\epsilon} n^{\ell/(2\ell \pm 1)})$ space at each vertex and $\tilde O(\frac{1}{\epsilon})$-bit headers, to route a message between any pair of vertices on a $(3\pm2/\ell+\epsilon,2)$-stretch path.

- A routing scheme for weighted graphs, that uses $\tilde O(\frac{1}{\epsilon}n^{1/k}\log D)$ space at each vertex and $\tilde O(\frac{1}{\epsilon}\log D)$-bit headers, to route a message between any pair of vertices on a $(4k-7+\epsilon)$-stretch path.

July 28, 2014 12:41 AM

Chip-firing games on Eulerian digraphs and NP-hardness of computing the rank of a divisor on a graph

Authors: Viktor Kiss, Lilla Tóthmérész
Download: PDF
Abstract: Baker and Norine introduced a graph-theoretic analogue of the Riemann-Roch theory. A central notion in this theory is the rank of a divisor. In this paper we prove that computing the rank of a divisor on a graph is NP-hard.

The determination of the rank of a divisor can be translated to a question about a chip-firing game on the same underlying graph. We prove the NP-hardness of this question by relating chip-firing on directed and undirected graphs.

July 28, 2014 12:40 AM

/r/emacs

C-c C-k in SLIME gives an error, but not when I do C-x C-e?

I have the following line in a Common Lisp program I am writing, in a .lisp file:

(setf urlbase "http://census.soe.com") 

And when I use C-x C-e to evaluate it with slime open, or manually type it into the repl, it correctly assigns the variable urlbase to "http://census.soe.com/". BUT when I attempt to compile the entire file with C-c C-k, it throws an error at that line, saying:

URLBASE is neither declared nor bound, it will be treated as if it were declared SPECIAL. 

What is going on here? Why does this error only emerge when I attempt to compile the entire file?

submitted by mooglinux
[link] [5 comments]

July 28, 2014 12:20 AM

Planet Clojure

Clojure X-Men

[image: https://c2.staticflickr.com/6/5557/14761955842_6a8bf4a66a_n.jpg]

Nobody knows how it happened. Some people think it was due to the rapid expansion and adoption of Clojure. Other people say that the language itself was caused by something deeper and more magical. No one knows for sure. All that we really know is that people started being born with extraordinary powers. Powers that no human had had before. They were strange and unique to each person they touched. The only thing that they all had in common was that each was an aspect of the Clojure programming language.

Luke (AKA Lazy Luke)

Luke was a teenager when his powers came to him. His mother always complained that he was lazy. It was true, he did prefer to sleep until noon. He also had a habit of putting everything off to the last minute, like saving all his papers for the night before they were due. One day, though, he noticed something very strange. He could start to see the future. Well not really “see” it. But he could see the infinite possibilities of the future. Not very far into the future, but enough. It was a few milliseconds at first. But now it was up to a full second. He checked the Clojure Docs as soon as he realized his gift. It was lazy evaluation and power to deal with infinite sequences.

Spress

Spress, whose real name is Emily, came into her power early. She was only 5 years old. Her mother had taken her to a farm to visit with the animals. Her mother had pointed at the cow and told her daughter that it said “Moo”. Then, at the horse, saying “Neigh”. Spress smiled and pointed at a bucket and said “cow”. Her mother shook her head at her, but Spress only smiled bigger. She said “cow” again. Then, suddenly, the bucket went “Moo”. She was immediately taken to the Clojure X-Men school, where they identified her power as protocols. She trained her power and now is so good at solving the “expression problem”, she is known as “Spress”.

Multi

Nobody knows Multi’s background. He came to notice in his early twenties with his powers. Ordinary humans process sensory input, (like sight, touch, and sound), in an asynchronous fashion. However, when it gets processed in the brain, it runs into a single pipeline bottleneck – consciousness. Multi’s power is that he can concurrently process his higher level consciousness and reasoning to all this sensory input. The result is that he can move, think, and perform super fast and in a super smart way. He got the power of Clojure’s concurrency.

Dot

Dot always had a way with animals. She had many pets growing up. Later, she would go into the forest and the animals would seek her out. She would be found resting by a tree surrounded by deer and birds. One time, on her walk, she fell down a ditch and had her leg trapped under a log. Her mother arrived, after searching for her, to see a Bear reach down and gently remove the log. She stood dumbfounded, as her daughter thanked the bear and it nodded in reply as it turned away. She could talk with animals effortlessly. She had the power of Clojure’s Interop.

Bob

Bob is the leader of the Clojure X-Men. He seeks out people with the power of Clojure and helps train and educate them. He also is the most powerful. He can come into any argument, problem, or challenge and immediately separate out what is most important to focus on. He always knows the right thing to do, without getting bogged down in unnecessary details . His power is Clojure’s simplicity.

There might be others out there, we don’t know. We can only hope, that they are found by Bob and the Clojure X-Men and use their powers for good.

by Carin Meier at July 28, 2014 12:12 AM


Lobsters

HN Daily

July 27, 2014

/r/clojure

StackOverflow

Playframework 2.3.0 issue with Scala IDE (Kepler)

Environment: Java 8, Scala 2.10, Play 2.3.0 sbt plugin, Scala IDE on Kepler (version 4.3.0)


Issue 1: The generated classes for "index.scala.html" and for my own newly created templates are populated properly into the "/my-first-app/target/scala-2.10/classes_managed/views/html" folder, but while writing code in the application controller I am not able to see these classes. When I try to import the classes individually instead of using import views.html.*, the generated classes are not shown and I don't know why.

What I have tried :

  1. windows->preference->workspace-> checked the check box "refresh using native hooks or polling" -> clicked apply
  2. after every "~run" I always refreshed my workspace.
  3. I added output folders manually for
    • /my-first-app/target/scala-2.10/classes_managed/
    • /my-first-app/target/scala-2.10/classes

After trying all of this I am still not able to import the classes individually for any xyz.scala.html file.


Issue 2: Getting multiple weird syntax errors in xyz.scala.html, for example:

Code snippet :

@(title: String)(content: Html) --> Error 
<!DOCTYPE html>
<html>
    <head>
        <title>@title</title>
        <link rel="stylesheet" media="screen" href="@routes.Assets.at("stylesheets/main.css")">
        <link rel="shortcut icon" type="image/png" href="@routes.Assets.at("images/favicon.png")">
        <script src="@routes.Assets.at("javascripts/hello.js")" type="text/javascript"/> 
        </script>
    </head>
    <body>
        @content
    </body>
</html>
  • Error:

    Multiple annotations found at this line:
             *too many arguments for constructor Object: ()Object
             *object templates is not a member of package play
             *type Template2 is not a member of package play.api.templates
             *not found: type BaseScalaTemplate
    

It is really annoying during development and slows down the development speed. If somebody has any idea regarding the above two issues, please share your feedback.

by Ankur Bhargava at July 27, 2014 11:53 PM

CompsciOverflow

How do we make sure in Paxos that we don't propose a different value if a majority has formed?

Recall that Paxos is a distributed system algorithm with the goal that the processes participating in its protocol will reach consensus on one of the valid values.

I was studying Paxos from:

http://research.microsoft.com/en-us/um/people/lamport/pubs/paxos-simple.pdf

and I was confused about one specific part. How does or why does property $P2^b$ satisfy property $P2^c$?

These are the properties:

$P2^b$ = If a proposal with value $v$ is chosen, then every higher-numbered (i.e. approx. later in time) proposal issued by any proposer has value $v$.

$P2^c$ = For any $v$ and $n$, if a proposal with value $v$ and number $n$ is issued, then there is a set $S$ (some majority) of acceptors such that either:

(a) no acceptor in $S$ has accepted any proposal numbered less than $n$, or

(b) $v$ is the value of the highest-numbered proposal among all proposals numbered less than $n$ accepted by the acceptors in $S$ (some majority).

The paper uses $S$ to denote some majority and $C$ to denote some majority that has actually chosen a value.

The thing that I am confused about is this: to me, $P2^b$ is saying that once a value has been chosen, say at sequence number $n$ (i.e. roughly time $n$), then after that time we want to make sure that any proposer is only able to propose the value of the majority (the chosen value). If we have that, then we do not risk the already-formed majority being reverted in some strange way; i.e. once we have formed a majority, we want it to stick and stay like that. However, it was not 100% clear to me why property $P2^c$ satisfies that requirement. I kind of see why (a) is a nice property to have: having (a) means that it is safe to issue a new proposal $(n, v)$, since we contacted some majority $S$ and none of them had accepted anything at a time earlier than now, $n$. So, if a majority had already formed, we would have seen at least one accepted value; since we did not see anything accepted, it is safe to propose something, because a majority has not formed.


Author: Leslie Lamport

Title: Paxos made simple

Institution: Microsoft Research

by Pinocchio at July 27, 2014 11:49 PM

StackOverflow

Why does "htop" show me dozens of PIDs in use by my app, but "ps" only shows me one?

I have a Clojure app that I am developing. I am testing it on the server, mostly by going into a "screen" session and typing:

java -jar lo_login_service-0.2-standalone.jar

and then I kill it by hitting Control-C. Then I make some changes. Then I test it again.

I assume only 1 PID is in use. If I do:

ps aux

I only see 1 PID in use:

das      15028  0.2 22.1 1185300 133520 pts/5  Sl+  Jul26   3:19 java -jar lo_login_service-0.2-standalone.jar

But if I run "htop", then I see:

15029 das        20   0 1157M  130M  9960 S  0.0 22.2  0:25.85 java -jar lo_login_service-0.2-standalone.jar

15030 das        20   0 1157M  130M  9960 S  0.0 22.2  0:07.29 java -jar lo_login_service-0.2-standalone.jar

15031 das        20   0 1157M  130M  9960 S  0.0 22.2  0:00.02 java -jar lo_login_service-0.2-standalone.jar

15032 das        20   0 1157M  130M  9960 S  0.0 22.2  0:00.25 java -jar lo_login_service-0.2-standalone.jar

15033 das        20   0 1157M  130M  9960 S  0.0 22.2  0:00.00 java -jar lo_login_service-0.2-standalone.jar

15034 das        20   0 1157M  130M  9960 S  0.0 22.2  0:14.68 java -jar lo_login_service-0.2-standalone.jar

15035 das        20   0 1157M  130M  9960 S  0.0 22.2  0:11.46 java -jar lo_login_service-0.2-standalone.jar

15036 das        20   0 1157M  130M  9960 S  0.0 22.2  0:00.00 java -jar lo_login_service-0.2-standalone.jar

15038 das        20   0 1157M  130M  9960 S  0.0 22.2  0:08.46 java -jar lo_login_service-0.2-standalone.jar

15039 das        20   0 1157M  130M  9960 S  0.0 22.2  0:04.50 java -jar lo_login_service-0.2-standalone.jar

15040 das        20   0 1157M  130M  9960 S  0.0 22.2  0:14.81 java -jar lo_login_service-0.2-standalone.jar

15041 das        20   0 1157M  130M  9960 S  0.0 22.2  0:03.93 java -jar lo_login_service-0.2-standalone.jar

15042 das        20   0 1157M  130M  9960 S  0.0 22.2  0:00.09 java -jar lo_login_service-0.2-standalone.jar

15043 das        20   0 1157M  130M  9960 S  0.0 22.2  0:00.00 java -jar lo_login_service-0.2-standalone.jar

15044 das        20   0 1157M  130M  9960 S  0.0 22.2  0:00.00 java -jar lo_login_service-0.2-standalone.jar

15045 das        20   0 1157M  130M  9960 S  0.0 22.2  0:00.00 java -jar lo_login_service-0.2-standalone.jar

15046 das        20   0 1157M  130M  9960 S  0.0 22.2  0:00.00 java -jar lo_login_service-0.2-standalone.jar

15047 das        20   0 1157M  130M  9960 S  0.0 22.2  0:00.00 java -jar lo_login_service-0.2-standalone.jar

15048 das        20   0 1157M  130M  9960 S  0.0 22.2  0:00.00 java -jar lo_login_service-0.2-standalone.jar

Why does htop show me so many PIDs in use?

by cerhovice at July 27, 2014 11:20 PM

QuantOverflow

How to see the impact of one variable on a set of other variables?

Editing my question:

I have decided to use multiple factor model to model my stress test. I am using factor shock method to implement the propagation of shocks. I am doing this according to a book "Multi-Asset Investing: A Practical Guide to Modern Portfolio Management" by Yoram Lustig. According to this:

"The investor shocks any risk factor in the portfolio by a chosen amount. The adjusted returns of other risk factors are modelled through a covariance matrix based on their correlation with the shocked risk factor. Finally, the hypothetical impact on the portfolio is calculated."

I have run a multiple factor regression of the portfolio's assets on 5 factors and obtained the respective loadings (betas) for these 5 factors. I also have a covariance matrix of the factor returns. Now, I do not know how to put a shock through this matrix. My question is how to set this up so that when I shock one variable, I get the implied returns of the others; then I can calculate the predicted/hypothetical return in that scenario.
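
For concreteness, one standard way to propagate the shock through the covariance matrix (a sketch of the usual conditional-normal convention, not necessarily the book's exact formulation, and assuming zero-mean factor returns): partition the factor covariance matrix so that block 1 is the shocked factor and block 2 holds the remaining factors. Then $E[\Delta F_2 \mid \Delta F_1 = s] = \Sigma_{21}\,\Sigma_{11}^{-1}\, s$ gives the implied moves of the other factors for a shock of size $s$, and the hypothetical portfolio P&L is approximately $\beta^\top \Delta F$ using the regression loadings $\beta$.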

PS: I cannot use Vector Auto Regression as it is not a recommended method for modelling stock returns. The question may be naive; maybe I am missing some really basic point here.

Thanks

Manurag

Thanks for your reply!!

So I will explain a bit more of my problem. I want to perform a stress test on my portfolio of equities. Basically, I want to see the impact of a shock to macroeconomic factors (e.g. the 10-year interest rate) on the portfolio P&L. Theoretically, my approach is:

  1. Obtain a set of relevant macroeconomic factors.
  2. Shock any one of these factors.
  3. Propagate this shock to other factors as the macroeconomic factors are generally correlated. Obtain the new values of factors.
  4. On the basis of risk model, calculate the portfolio P&L based on new set of risk factors.

So I want to know about some model or some similar thing that I can program, preferably in R, so that it becomes possible for me to do, say:

Change the interest rate by -10% and see how it affects the portfolio P&L.

Thanks again Manurag

by Manurag at July 27, 2014 11:04 PM

StackOverflow

How to run a Scala script within IntelliJ IDEA?

Here is a trivial scala script:

object test {
  def hi() { print("hi there from here") }
}


test.hi()

From the command line it does the expected:

scala /shared/scaladem/src/main/scala/test.scala
Loading /shared/scaladem/src/main/scala/test.scala...
defined module test
hi there from here
Welcome to Scala version 2.10.2 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_65).
Type in expressions to have them evaluated.
Type :help for more information.

scala>

But within Intellij it gives a compilation error. Right click | Run test.scala

expected class or object definition
test.hi()
^

BTW I also tried running as a scala worksheet. That was MUCH worse - tons of garbage output and did not even get close to compiling.

Update: it appears there is an older but similar question:

How to run Scala code in Intellij Idea 10

I went into Run Configuration and unchecked "Make" as instructed (this was bothersome but so be it ..)

However after making that change I get a different error:

Exception in thread "main" java.lang.NoClassDefFoundError: scala/Either
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:190)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:113)
Caused by: java.lang.ClassNotFoundException: scala.Either
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 3 more

Note: the Scala-Library is properly set up:

(screenshots omitted: project settings showing the Scala library on the classpath)

Another update (after @lhuang's comment below): I followed the suggestion to create another project from scratch. In that case a Scala worksheet worked properly (test.sc). But a Scala script (which works when run from the command line via "scala test.scala") still does not work, even in this brand new "scala" project.
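
For what it's worth, a common workaround (a sketch, assuming you are happy to run the file as a regular application rather than as a script) is to give IntelliJ an entry point by moving the top-level call inside an object that extends App:

object test extends App {
  def hi() { print("hi there from here") }
  hi()
}

IntelliJ can then run it through a plain Application run configuration, and the command-line scala runner can typically still execute it, since it recognises a lone top-level object with a main method.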

by javadba at July 27, 2014 10:23 PM

Lobsters

StackOverflow

Play framework 2.3 debug

I just switched to play 2.3, and I am having trouble getting the debug mode to work. From the console I am running 'activator -jvm-debug 9999', and then from inside intellij I am using the default play2 debug configuration. However, it doesn't seem to work and I'm getting this readout in the debug console:

Connected to the target VM, address: '127.0.0.1:52698', transport: 'socket' [info] Set current project to modules (in build file:/Users/Grillz/IdeaProjects/titan-play-test/.idea/modules/) java.lang.RuntimeException: No main class detected. at scala.sys.package$.error(package.scala:27)

This used to work great in play 2.2.3, is there additional config I need to be doing? I can't seem to find any changes in the migration documentation. Any help would be great. Thanks

by user2921729 at July 27, 2014 10:20 PM

/r/freebsd

StackOverflow

anonymous function performance in PHP [closed]

I'm starting to use functional programming paradigms in php and was wondering what the performance impacts are. Some googling just seems to say that there are some. To be specific, I would like to know:

  • Is there actually a performance impact or is it an urban legend?
  • What is the performance impact (hopefully someone out that has done benchmarks)?
  • What causes this impact (if one exists)?
  • Is it fixed cost, or per execution?

Any resources you guys have would be greatly appreciated :)

Thanks in advance

by amccausl at July 27, 2014 10:11 PM

Planet Scala

Scala: Next Steps

As with every living programming language, Scala will continue to evolve. This document describes where the core Scala team sees the language going in the medium term and where we plan to invest our efforts.

In a nutshell, our main goals are to make the language and its libraries simpler to understand, more robust, and better performing. The features described in this document span the next three major releases of the Scala distribution. Naturally, the planning for later releases is more tentative and fluid than for earlier ones.

Scala 2.12

Scala 2.12’s main theme is Java 8 interoperability. It will support Java 8 lambdas and streams and will allow easy cross calls with these features in both directions. We recently published a detailed feature list and roadmap for this release.

We have not yet decided on version numbers for the releases beyond 2.12, so for the time being we will use opera names as designators.

Scala “Aida”

This release focuses on improving the standard library.

  1. Cleanups and simplification of the collections library: we plan to reduce the size of the collections library, providing some functionality as separate modules. Generally, we want to make them even easier to use and structure them so that they are more amenable to optimizations. Where needed, breaking changes will be announced using deprecation in Scala 2.12; regular use of the collections will likely be unaffected, but custom collections may need to be adapted to the simplified hierarchy.

    1. Reduce reliance on inheritance
    2. Make all default collections immutable (e.g. scala.Seq will be an alias of scala.immutable.Seq)
    3. Other small cleanups that are possible with a rewriting step (e.g. rename mapValues)
  2. Added functionality: We’d like to introduce several new modules, including a couple of spin-offs from the collections library.

    1. Lazy collections through improved views, including Java 8 streams interop.
    2. Parallel collections with performance improvements obtained from operation fusion and more efficient parallel scheduling.
    3. An integrated abstraction to handle validation.
  3. The (independent) scala.meta project aims to establish a new standard for reflection and macro programming. It will be considered for integration in the standard library once it is mature and stable.

  4. As in every Scala release, we’ll also work on improving compiler performance. Since this release focuses on the library, compiler changes will be strictly internal.

Backwards compatibility and migration strategy: The changes to collections might require source code to be rewritten, even though this should be rare. However, we aim to maintain source code compatibility modulo an automatic migration tool (analogous to go fix for Go) that can do the rewriting automatically. Ideally, that tool should be robust and expressive enough to support cross-building.

Prototypes of the new collection functionality and meta-programming libraries will be made available as separate libraries in the Scala 2.12 timeframe, so that projects can experiment with the new features early.

Scala “Don Giovanni”

The main focus for this release is the Scala programming language and its compiler. The new version should provide clear improvements in simplicity, usability and stability, while at the same time staying backwards compatible with current usage of the language.

Areas that will be investigated include the following:

  1. Cleaned-up syntax: The objective is to more clearly expose Scala’s principle of having few orthogonally composable features.

    1. Trait parameters instead of early definition syntax
    2. XML string interpolation instead of XML literals
    3. Procedure syntax is dropped in favor of always defining functions with =
    4. Simplified and unified type syntax for all forms of information elision: existential types and partial type applications are both expressed with _, forSome syntax is eliminated.
  2. Removing puzzlers: There are some features in Scala which are known to be prone to puzzlers, and which can be made safer by tweaking the language. In particular, the following changes would help:

    1. Result types are mandatory for implicit definitions.
    2. Inherited explicit result types take precedence over locally-inferred ones.
    3. Universal toString conversion and concatenation via + should require explicit enabling (a small illustrative snippet follows this list).
    4. Avoid surprising behavior of auto-tupling.
  3. Simple foundations: This continues the strive for simplicity on the type systems side. We will identify formerly disparate features as specific instances of a small set of concepts. This will help in understanding the individual features and how they hang together. It will also reduce unwanted feature interactions. In particular:

    1. A single fundamental concept - type members - can give a precise meaning to generics, existential types, wildcards, and higher-kinded types.
    2. Intersection and union types make member selection more regular and avoid blow-ups when computing tight upper and lower bounds of sets of types.
    3. Tuples can be decomposed recursively, overcoming current limits to tuple size, and leading to simpler, streamlined native support for abstractions like HLists or HMaps which are currently implemented in some form or other in various libraries.
    4. The type system will have theoretical foundations that are given by a minimal core calculus (DOT).
  4. Better tooling: We will continue to focus on the tooling side, with the goals to improve batch compiler speed and to make the compiler more adapted to fast incremental compilation and IDE presentation support.

  5. Faster code: We plan to improve performance of generated code using optimizations including:

    1. Robust specialization using Miniboxing techniques, applied to collections (a preview of this may already be available in Aida).
    2. Improvements to value classes: Can be array elements, can play part in specializations, can be multi-field.
    3. Optimized implementation of thread-local lazy vals.

We will collaborate here with the Java effort in project Valhalla, which has similar goals.
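
To make item 2.3 above concrete, here is a small Scala 2 snippet (an illustrative example, not taken from the roadmap) showing the kind of puzzler that universal + concatenation enables today:

object PuzzlerDemo extends App {
  // any2stringadd silently turns + into string concatenation when no other + applies
  println(List(1, 2, 3) + "!")   // "List(1, 2, 3)!" instead of a compile error
  println("1" + 1 + 1)           // "111"
  println(1 + 1 + "1")           // "21"
}

Requiring explicit enabling would turn the first line into a compile-time error rather than a silent surprise.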

Backwards compatibility

Since some features are superseded by others, some source code will have to be rewritten. However, using the migration tool described earlier, common Scala code should port automatically. In particular, we aim that all features described in the latest edition of “Programming in Scala” can be ported automatically. However, the porting guarantee will not extend to features that are labelled “experimental”. For some of these (e.g. macros and reflection), we aim to have a replacement that can fulfill analogous functionality, but using different notation and APIs.

Resourcing

Currently, having a feature on the list does not mean that we have already committed the resources to work on this. The roadmap is intended as a framework for the development of future Scala versions. We are happy to take contributions that implement parts of it that are lower on our priority list. Before starting work on a feature not listed here, it must first be accepted for inclusion in the roadmap.

July 27, 2014 10:00 PM

/r/emacs

Creating a portable version of Emacs in my Dropbox

Hello,

I am new to Emacs and I would like to have it with me on various Windows machines in my Dropbox. I tried and failed somehow.

Of course it is possible to extract the emacs zip directory into my Dropbox and to start it on different machines, but the problem is that usually my configuration files and ELPA package files are only in the Appdata/Roaming directory on the local machine.

What would be a good way to put my .emacs, my elisp directory and the ELPA packages in my Dropbox?

I googled and I did not find a solution. Any help would be really appreciated.

PS: bonus question: Is it possible to have also a monospaced font in my dropbox and to use it in Emacs without installing it on the Windows machine. That would be really awesome.

submitted by dahanbn
[link] [13 comments]

July 27, 2014 09:59 PM

CompsciOverflow

Efficiently finding the maximum pairwise GCD of a set of natural numbers

Consider the following problem:

Let $S = \{ s_1, s_2, ... s_n \} $ be a finite subset of the natural numbers.

Let $G = \{\gcd(s_i, s_j) \mid s_i, s_j \in S, s_i \neq s_j\}$, where $\gcd(x,y)$ is the greatest common divisor of $x$ and $y$.

Find the maximum element of $G$.

This problem can be solved by taking the greatest common divisor of each pair using Euclid's algorithm and keeping track of the largest one.
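
For reference, a minimal sketch of that brute-force baseline (the names here are made up for illustration):

object MaxPairwiseGcd extends App {
  // Euclid's algorithm
  def gcd(a: Long, b: Long): Long = if (b == 0) a else gcd(b, a % b)

  // O(n^2) scan over all pairs, keeping the largest gcd seen so far
  def maxPairwiseGcd(s: IndexedSeq[Long]): Long =
    (for (i <- s.indices; j <- (i + 1) until s.length) yield gcd(s(i), s(j))).max

  println(maxPairwiseGcd(Vector(12L, 18L, 25L, 30L)))  // 6
}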

Is there a more efficient way of solving this?

by Tommy at July 27, 2014 09:48 PM

TheoryOverflow

Automatically Adapting Forgetting Factor for Online EM

I've been reading some interesting papers recently on methods for automatically and adaptively setting the learning rate in stochastic gradient descent (SGD). In particular, "No more pesky learning rate" and "Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients" --- which are a pair of related papers that deal with a method for automatically adjusting the learning rate during stochastic gradient descent. The first paper introduces the basic method and the second paper talks mostly about how to apply it in practical settings where one typically uses mini-batches and may have a sparse gradient etc.

I was wondering if a similar approach exists for the "online EM algorithm", where one would be learning an adaptive forgetting factor rather than an adaptive learning rate. Clearly, there is a relationship between the two learning approaches, but it seems like more effort has been put into (and thus more results obtained for) SGD. Are there similar methods to rid the online EM of its meta-parameter? Where could I find some publications on this topic?

Thanks!

by nomad at July 27, 2014 09:42 PM

StackOverflow

Why doesn't scala infer the type members of an inherited trait?

I have a group of types that each have their own type member:

sealed trait FieldType {
    type Data
    def parse(in: String): Option[Data]
}
object Name extends FieldType { 
    type Data = String 
    def parse(in: String) = Some(in)
}
object Age extends FieldType { 
    type Data = Int 
    def parse(in: String) = try { Some(in.toInt) } catch { case _ => None }
}

And I have a group of types that operate on sets of the FieldTypes (using boilerplate rather than abstracting over arity):

sealed trait Schema {
    type Schema <: Product
    type Data <: Product
    val schema: Schema
    def read(in: Seq[String]): Option[Data]
}
trait Schema1 extends Schema {
    type D1
    type FT1 <: FieldType { type Data = D1 }
    type Schema = Tuple1[FT1]
    type Data = Tuple1[D1]
    def read(in: Seq[String]) = schema._1.parse(in(0)).map(Tuple1.apply)
}
trait Schema2 extends Schema {
    type D1
    type D2
    type FT1 <: FieldType { type Data = D1 }
    type FT2 <: FieldType { type Data = D2 }
    type Schema = (FT1, FT2)
    type Data = (D1, D2)
    def read(in: Seq[String]) = {
        for {
            f <- schema._1.parse(in(0))
            s <- schema._2.parse(in(1))
        } yield (f, s)
    }
}

I thought I could use this system to elegantly define sets of fields that are meaningful because scala would be able to infer the type members:

class Person extends Schema2 {
    val schema = (Name, Age)
}

However, this doesn't compile! I have to include definitions for all the type members:

class Person extends Schema2 {
    type D1 = String; type D2 = Int
    type FT1 = Name.type; type FT2 = Age.type
    val schema = (Name, Age)
}

How come scala can't infer D1,... and FT1,...? How can I refactor this so I don't have to specify the type variables in Person?

Note: Once I have a better understanding of macros, I plan to use them for the Schema types. Also, I'd rather not use shapeless. It's a great library, but I don't want to pull it in to solve this one problem.

by Dan Gallagher at July 27, 2014 09:27 PM

zfs: How to create pool of two existing vdevs, then mount it

I have two pools (shortened output from zpool status):

  pool: vol1
    vol1
      mirror-0
        ata-WDC_WD20EARX-00PASB0_WD-WCAZAJ004702
        ata-WDC_WD20EARX-00PASB0_WD-WCAZAJ069805
  pool: vol2
    vol2
      mirror-0
        ata-ST4000DM000-1F2168_S300MZ7G
        ata-ST4000DM000-1F2168_S300ELBZ

I want to combine them into one pool, call it "Aquilonde", then mount that on my server in the filesystem as /plex-server. I'm new at zfs. Have googled without much satisfaction. I tried this and variations of it:

~# zpool create Aquilonde vol1 vol2
cannot open 'vol1': no such device in /dev
must be a full path or shorthand device name

by Russ Bateman at July 27, 2014 09:01 PM

Planet FreeBSD

bsdtalk243 – mandoc with Ingo Schwarze

Interview about mandoc with Ingo Schwarze.  The project webpage describes mandoc as "a suite of tools compiling mdoc, the roff macro language of choice for BSD manual pages, and man, the predominant historical language for UNIX manuals."

Recorded at BSDCan 2014.

File Info: 16Min, 8MB.

Ogg Link: http://cis01.uma.edu/~wbackman/bsdtalk/bsdtalk243.ogg

by Will Backman at July 27, 2014 08:55 PM

/r/compsci

StackOverflow

Trait inheritance with <: [duplicate]

This question already has an answer here:

I was used to inherit from a trait like this:

trait A
trait B extends A

But recently I discovered by accident that it is also possible with <: :

trait A
trait B <: A

Why? What is the motivation? (Don't want to hear that it's written down in the specs.) It is not possible when a class comes into play.

by Peter Schmitz at July 27, 2014 08:20 PM

value resolveOne is not a member of akka.actor.ActorSelection

I get the above error message from here:

implicit val askTimeout = Timeout(60 seconds)
val workerFuture = workerContext actorSelection(payload.classname) resolveOne()
val worker = Await.result(workerFuture, 10 seconds)
worker ask Landau(List("1", "2", "3"))

specifically from the second line.. the import made is

import akka.actor._
import akka.util.Timeout
import akka.pattern.{ ask, pipe }
import scala.concurrent.duration._
import scala.concurrent.Await
import java.util.concurrent.TimeUnit

The Akka version is 2.2.1 and Scala is 2.10.2; I'm using sbt 0.13 to build it all. I cannot really understand what's wrong, since resolveOne is definitely coming from that package.

EDIT: I made a print of all the methods of the class with

ActorSelection.getClass.getMethods.map(_.getName).foreach { p => println(p)}

and this is the result:

apply
toScala
wait
wait
wait
equals
toString
hashCode
getClass
notify
notifyAll

by Novalink at July 27, 2014 08:17 PM

What is the formal difference in Scala between braces and parentheses, and when should they be used?

What is the formal difference between passing arguments to functions in parentheses () and in braces {}?

The feeling I got from the Programming in Scala book is that Scala's pretty flexible and I should use the one I like best, but I find that some cases compile while others don't.

For instance (just meant as an example; I would appreciate any response that discusses the general case, not this particular example only):

val tupleList = List[(String, String)]()
val filtered = tupleList.takeWhile( case (s1, s2) => s1 == s2 )

=> error: illegal start of simple expression

val filtered = tupleList.takeWhile{ case (s1, s2) => s1 == s2 }

=> fine.
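
As a concrete illustration (a sketch added here, not part of the original question), the braces form accepts a pattern-matching anonymous function, while the parentheses form needs a plain function expression:

val tupleList = List(("a", "a"), ("a", "b"))
val viaBraces = tupleList.takeWhile { case (s1, s2) => s1 == s2 }            // compiles
val viaParens = tupleList.takeWhile((p: (String, String)) => p._1 == p._2)   // also compiles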

by Jean-Philippe Pellet at July 27, 2014 08:07 PM

How do I import a local java library in clojure? (lein)

I'm trying to use a java library (jar file) called DragonConsole that is not on maven central or clojars.

I want to import this library in my clojure application, but so far I can't figure out how to do so.

I tried setting up a local maven repo, but I don't think I did it right.

lein deps gives me this error:

(Retrieving dragonconsole/dragonconsole/3.0.0/dragonconsole-3.0.0.pom from local)
(Could not transfer artifact dragonconsole:dragonconsole:pom:3.0.0 from/to local)
(file:/home/michael/clj/enclojed/maven_repository/): no supported algorithms found)

project.clj:

:dependencies [[org.clojure/clojure "1.6.0"]
               [clojure-lanterna "0.9.4"]
               [dragonconsole "3.0.0"]]
:repositories [["local" {:url ~(str (.toURI (java.io.File. "maven_repository")))}]]

project folder:

maven_repository/DragonConsolev3.jar
maven_repository/dragonconsole/dragonconsole/maven-metadata-local.xml
maven_repository/dragonconsole/dragonconsole/3.0.0/dragonconsole-3.0.0.pom
doc/...
src/...
test/...
resources/...
project.clj

If there are any other files you need to see, check the git page. Thank you in advance!

by Michael Auderer at July 27, 2014 07:39 PM

IO in the Scala way

I thought up a function that provides types with InputStream and OutputStream:

trait Format[A] {
  def read(i: TypedInput): A
  def write(o: TypedOutput, a: A)
}
trait TypedInput {
  def i: InputStream
  def reads[A](implicit f: Format[A]) = f.read(this)
}
trait TypedOutput {
  def o: OutputStream
  def writes[A](a: A)(implicit f: Format[A]) = f.write(this, a)
}
// and implicit conversions from InputStream to TypedInput, etc

For example, given a type class of Format[A], now an InputStream can read an instance of A by i.reads[A] and i.reads(FormatOfA):

implicit object IntFormat extends Format[Int] {
  def read(i: TypedInput) = new DataInputStream(i.i).readInt
  def write(o: TypedOutput, a: Int) = new DataOutputStream(o.o).writeInt(a)
}
val i: InputStream = ???
val o: OutputStream = ???
i.reads[Int]
o.writes(3)
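
The snippet above only works because of the implicit conversions mentioned in the first comment; a minimal sketch of what they might look like (the names here are assumptions, not from the original post):

import java.io.{InputStream, OutputStream}
import scala.language.implicitConversions

object TypedIO {
  // wrap raw streams so they pick up reads/writes
  implicit def asTypedInput(in: InputStream): TypedInput =
    new TypedInput { val i = in }
  implicit def asTypedOutput(out: OutputStream): TypedOutput =
    new TypedOutput { val o = out }
}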

When I first thought this up, it looked very Scala-like with type classes, but now I think it is still very Java-like, because it just extends java.io so it looks fancy, only with type classes added. Please give me some advice on implementing this concept in a more Scala way!

by Ryoichiro Oka at July 27, 2014 07:34 PM

How to assign enum type arguement in scala

object StorageType extends Enumeration{
    type Name = Value
    val HTML, TEXT, SUBJECT = Value
  }

def read(key:String, _type:StorageType.Value = StorageType.HTML):String = {
    val accessKey = getPrefix(_type) + key
    DaoFactory.getPreviewStorageDao.get(accessKey).data
  }

Does this mean I can only send StorageType.HTML as an argument and not StorageType.SUBJECT? Also, I am pretty new to Scala, so can you tell me what exactly _type does here?
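
For illustration (a sketch with a stubbed-out body, since getPrefix and the DAO are not shown), the = StorageType.HTML part is only a default value, so any member of the enumeration can be passed:

object DefaultArgDemo extends App {
  object StorageType extends Enumeration { val HTML, TEXT, SUBJECT = Value }

  def read(key: String, _type: StorageType.Value = StorageType.HTML): String =
    _type.toString + ":" + key   // stand-in for the real lookup

  println(read("page"))                        // uses the default, HTML
  println(read("page", StorageType.SUBJECT))   // any other member works too
}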

by user3851565 at July 27, 2014 07:22 PM

CompsciOverflow

What kind of construct is this? [on hold]

Just for fun, I wanted to define a function in Python that you call like this:

add(1)(2)() = 3
add(1)(2)(3)() = 6

I put this together:

from operator import add

def hey_i_just_met_you(f):
    def and_this_is_crazy(x):
        def but_heres_my_number(y=None):
            return and_this_is_crazy(f(x, y)) if y is not None else x
        return but_heres_my_number
    return and_this_is_crazy

so_call_me_maybe = hey_i_just_met_you(add)

Now so_call_me_maybe acts like the function I wanted. I'm wondering what kind of thing this is (if anything). I wouldn't even know where to look for information about this.

by user1475412 at July 27, 2014 07:19 PM

StackOverflow

In Scala, what does "view" do?

Specifically I'm looking at Problem 1 here

http://pavelfatin.com/scala-for-project-euler/

The code as listed is as follows

val r = (1 until 1000).view.filter(n => n % 3 == 0 || n % 5 == 0).sum

I can follow everything except for "view". In fact if I take out view the code still compiles and produces exactly the same answer.
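
A small sketch (not from the original post) that makes the difference visible: without view the map runs eagerly and builds an intermediate collection, with view the work is deferred until something like sum forces it:

object ViewDemo extends App {
  val strict = (1 to 5).map { n => println(s"strict: $n"); n * 2 }      // prints immediately
  val lazily = (1 to 5).view.map { n => println(s"lazy: $n"); n * 2 }   // prints nothing yet
  println(lazily.sum)                                                   // elements computed only now
}

For the Project Euler problem the result is the same either way; view just avoids materialising the intermediate filtered collection before summing.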

by deltanovember at July 27, 2014 07:15 PM

Triangular number sequence using Scala's Fibonacci

The aim is to create a triangular number sequence (1, 3, 6, 10, 15, 21) using Scala's Fibonacci approach, and the sixth element needs to be returned.

Test

test("triangular") {
  assert(Calculation.triangular(1, 3) === 21)
}

Main

def triangular(a: Int, b: Int) : Int = {
  lazy val s: Stream[Int] = a #:: s.scanLeft(b)(_+_)
  s(5)
}

Outcome

[info] - triangular *** FAILED ***
[info]   18 did not equal 21 (CalculationTest.scala:37)
[error] Failed: Total 9, Failed 1, Errors 0, Passed 8
[error] Failed tests:
[error]         testingscala.CalculationTest
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 5 s, completed Jul 27, 2014 7:55:16 PM

scanLeft will work to create a Fibonacci sequence, but not to create a triangular one.

Which option needs to be used to add 2, 3, 4, 5 and subsequently 6 to the last element, which would result in the sequence 1, 3, 6, 10, 15, 21?
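
One possible direction (an illustrative sketch, not necessarily the intended answer): instead of scanning with a fixed b, define the stream against itself so that each element adds an increment that grows by one:

object Triangular extends App {
  lazy val triangulars: Stream[Int] =
    1 #:: triangulars.zip(Stream.from(2)).map { case (t, n) => t + n }
  // 1, 3, 6, 10, 15, 21, ...
  println(triangulars(5))  // 21
}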

by utrecht at July 27, 2014 07:11 PM

Best practice for modifying collection attributes functionally

I'm curious how to go about implementing a class that obeys strict functional programming rules.

For example, if I have a class that has two attributes and I have a method that modifies them, how would I go about doing so? The attributes would be private vals, and the method would have to return a new instance of the attribute every time it modifies it. This is fine, except I would like to keep it contained, meaning I would need a way to set the val.

What would be the best way to do this so that I don't have thousands of instances of say, a collection, floating around in memory?
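
A minimal sketch of the usual pattern (a hypothetical class, not from the question): keep the fields as vals and have the method return a modified copy; Scala's persistent collections use structural sharing, so the copies do not duplicate the whole collection in memory:

final case class Counter(count: Int, history: Vector[Int]) {
  // returns a new instance instead of mutating this one
  def record(n: Int): Counter = copy(count = count + n, history = history :+ n)
}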

by user3789100 at July 27, 2014 07:05 PM

Planet Clojure

Clojure Destructuring Tutorial….

Clojure Destructuring Tutorial and Cheat Sheet by John Louis Del Rosario.

From the post:

When I try to write or read some Clojure code, every now and then I get stuck on some destructuring forms. It’s like a broken record. One moment I’m in the zone, then this thing hits me and I have to stop what I’m doing to try and grok what I’m looking at.

So I decided I’d write a little tutorial/cheatsheet for Clojure destructuring, both as an attempt to really grok it (I absorb stuff more quickly if I write it down), and as future reference for myself and others.

Below is the whole thing, copied from the original gist. I’m planning on adding more (elaborate) examples and a section for compojure’s own destructuring forms. If you want to bookmark the cheat sheet, I recommend the gist since it has proper syntax highlighting and will be updated first.

John’s right, the gist version is easier to read.

As of 27 July 2014, the sections on “More Examples” and “Compojure” are blank if you feel like contributing.

I first saw this in a tweet by Daniel Higginbotham.

by Patrick Durusau at July 27, 2014 07:04 PM

StackOverflow

How to find max from a list of tuples in Scala?

I have the following list of tuples in Scala. How do I find the tuple with the max key or max value?

val arr = List(('a',10),('b',2),('c',3))

Max key lexicographically: c. Max value: 10.
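
For illustration (one-liners added here, not part of the original question):

val arr = List(('a', 10), ('b', 2), ('c', 3))
val byKey   = arr.maxBy(_._1)   // ('c', 3)  -- tuple with the max key
val byValue = arr.maxBy(_._2)   // ('a', 10) -- tuple with the max value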

by Shakti at July 27, 2014 07:01 PM

Planet Clojure

The Simplicity of Clojure

The Simplicity of Clojure by Bridget Hillyer and Clinton N. Dreisbach. OSCON 2014.

A great overview of Clojure that covers:

  • Clojure Overview
  • Collections
  • Sequences
  • Modeling
  • Functions
  • Flow Control
  • Polymorphism
  • State
  • Coljure Libraries

Granted they are slides so you need to fill in with other sources of content, such as Clojure for the Brave and True, but they do provide an outline for learning more.

I first saw this in a tweet by Christophe Lalanne.

by Patrick Durusau at July 27, 2014 06:47 PM

StackOverflow

ansible : how to pass multiple commands

I tried this :

- command: ./configure chdir=/src/package/
- command: /usr/bin/make chdir=/src/package/
- command: /usr/bin/make install chdir=/src/package/

which works, but I guess there is something neater.

So I tried this :

from: multiple commands in the same line for bruker topspin, which gives me back "no such file or directory"

- command: ./configure;/usr/bin/make;/usr/bin/make install chdir=/src/package/

I tried this too : https://u.osu.edu/hasnan.1/2013/12/16/ansible-run-multiple-commands-using-command-module-and-with-items/

but I couldn't find the right syntax to put :

- command: "{{ item }}" chdir=/src/package/
  with_items:
      ./configure
      /usr/bin/make
      /usr/bin/make install

That does not work, saying there is a quote issue.

Anyone ? Thank you

by John Doe at July 27, 2014 06:45 PM

/r/compsci

TheoryOverflow

Any examples of the following error detecting code?

A 2k-bit input is split into two k-bit halves, one half is chosen at random to be transmitted in the clear. The other half is encoded with the first half to produce a code, called X, of length 2k-1 bits with 50% probability and 2k bits with 50% probability. The coded message is then 3k-1 or 3k bits. This code is transmitted and can correct at most k/2 error bits if errors are introduced randomly.

The code also has the property that X can be transmitted without the in-the-clear first half, and the two halves can still be decoded from X.

Are there any well known examples of error detecting codes that operate like this?

Unnecessary for the question but included anyway is the following additional information : In the second case, the decoding algorithm is slower. In this case the code has the property that it is compressive (50% of time 1 bit less), but not correcting. It can still detect errors with exponentially decreasing probability to the number of error bits.

by Cris at July 27, 2014 06:28 PM

Lobsters

/r/emacs

Display preformatted org code with org2blog

Hi,

Whenever I use org2blog and need to display a snippet of org code, I try the following:

#+BEGIN_EXAMPLE
* Task 1
** Subtask
* Task 2
#+END_EXAMPLE

or even this:

#+BEGIN_SRC org
* Task 1
** Subtask
* Task 2
#+END_SRC

However, these don't work as expected. They are transformed to HTML as h1, h2, etc, as if they weren't surrounded by any #+BEGIN_* or #+END_*.

I want the above snippets to be formatted with pre (if possible, with TODO and DONE properly colored, but this is not a must), or similar. Any tips on this?

submitted by DeathStarAway
[link] [5 comments]

July 27, 2014 06:12 PM

QuantOverflow

Brownian Bridge's first passage time distribution

Let's say we have a Brownian Bridge $Y_{b,T}(t)$ such that $Y_{b,T}(0)=0$, $Y_{b,T}(T)=b$.

Let's say we are interested in the first passage time of $Y_{b,T}(t)$ at level $b$: $\tau_b = \{\min \tau; Y_{b,T}(\tau)=b\}$.

How could I calculate the distribution of $\tau_b$?

by athos at July 27, 2014 06:00 PM

Selling an American call option early

I understand it is never optimal to exercise an American call option early. [1] [2] However, here are my two contradictory thoughts about selling an American call option early.

Assumptions

  1. I can only buy or sell a call option, never exercise it.
  2. I am continuously bullish on the underlying stock.

Contradictory thoughts

  1. The probability of touching is twice the probability of expiring in the money. [3] This implies that the call option is twice as likely to meet a profit target prior to expiration than at expiration. Thus, it is more profitable to sell early.
  2. On the other hand, because I am continuously bullish on the underlying stock, it would make sense to wait for the stock to appreciate. Thus, it is more profitable to sell the call at the last minute.

Thus, is it ever optimal to sell an American call option early?

by cona at July 27, 2014 05:58 PM

CompsciOverflow

What is the complete version of the paper: "How to Generate and Exchange Secrets (extended abstract)" by Andrew Yao?

I've found numerous places that claim that the paper "How to Generate and Exchange Secrets" by Andrew Yao introduces garbled circuits as a solution to the secure multiparty computation problem. However, I can only seem to locate the extended abstract which lacks proofs and does not seem to mention garbled circuits. It only appears to define useful properties for solving the problem and state several theorems without proof. A complete version of the paper is mentioned at the end of this extended abstract, but I cannot locate it. Is the complete version under a different name?

I have already found expositions that explain garbled circuits. At this point I am interested in finding the complete paper, if possible.

by user2309167 at July 27, 2014 05:55 PM

StackOverflow

"no hosts matched" issue with Vagant and Ansible

This is driving me crazy and I can't seem to resolve it.

I have installed Vagrant, VirtualBox and Ansible and trying to run provision over one host but it always returns "skipping: no hosts matched"

The head of my playbook file looks like this:

---
- hosts: webservers
  user: vagrant
  sudo: yes

and my /etc/ansible/hosts file looks like this:

[webservers]
webserver1

I tried putting the IP address there but had the same result. I have added my ssh key to the server and added webserver1 host to both .ssh/config and /etc/hosts

I can ssh vagrant@websrver1 fine without being prompted for a password, thanks to using the ssh key.

What am I missing here?

  • Host: Debian 7.2
  • Client machine: Debian 7
  • Virtualbox: 4.1.18
  • Vangrantup: 1.4.1
  • Ansible: 1.5

by Dubby at July 27, 2014 05:14 PM

How do I access a method-owned case class's companion?

I've been working with macros and case classes, but while testing I've found that "method-owned" case classes behave differently than non-method owned. Am I missing something? Is there a workaround? Or is this a bug?

In my minimal testcase below I generate an implementation for foo that returns "works" or "fails" if it finds the type's companion symbol. Ie. C1 works, C2 fails.

class Macro(val c: blackbox.Context) {
  import c.universe._

  def impl[A: c.WeakTypeTag]: c.Expr[String] = {
    weakTypeOf[A].typeSymbol.companion match {
      case NoSymbol => c.Expr[String](q""""fails"""")
      case _        => c.Expr[String](q""""works"""")
    }
  }
}

object TestSpec {
  object C1 { def foo: String = macro Macro.impl[C1] }
  case class C1()

  def main(args: Array[String]) = {
    assert("works" == C1.foo, "C1 fails")

    object C2 { def foo: String = macro Macro.impl[C2] }
    case class C2()

    assert("works" == C2.foo, "C2 fails")
  }
}

For reference: Scala 2.11.2, sbt 0.13.5

by Dale Wijnand at July 27, 2014 05:07 PM

/r/netsec

StackOverflow

Relative performance impact of logging in Scala

I've seen the following ways of logging. What's the runtime performance impact of each? Are there any other benefits of using one over the others?

val strValue = "xyz"
val intValue = 200

log.debug("This will log the string " + strValue + " and the int " + intValue) 

log.debug(s"This will log the string $strValue and the int $intValue" )  

log.debug("This will log the string %s and the int %d".format(strValue,intValue))

log.debug("This will log the string {} and the int {}", strValue, intValue) 

Most of my current logging needs are inside Play and Akka projects.
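
Independently of which formatting style is used, the usual way to keep the cost near zero when the level is disabled is to defer building the message, either with the SLF4J-style {} placeholders shown last or with a call-by-name parameter; a minimal sketch (a hypothetical trait, not a specific library's API):

trait Log {
  def isDebugEnabled: Boolean
  protected def write(msg: String): Unit

  // msg is call-by-name, so the string is only constructed if debug is enabled
  def debug(msg: => String): Unit = if (isDebugEnabled) write(msg)
}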

by Soumya Simanta at July 27, 2014 05:06 PM

Compojure framework benchmark

I'm trying to use https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/compojure to run this framework benchmark in Clojure.

my ns hello.handler is:

(ns hello.handler
  (:import com.mchange.v2.c3p0.ComboPooledDataSource)
  (:use compojure.core
        ring.middleware.json
        ring.util.response
        korma.db
        korma.core
        hiccup.core
        hiccup.util
        hiccup.page)
  (:require [compojure.handler :as handler]
            [compojure.route :as route]
            [clojure.java.jdbc :as jdbc]
            [clojure.java.jdbc.sql :as sql]))

Unfortunately I get this error message...

Exception namespace 'compojure.core' not found after loading '/compojure/core' clojure.core/load-one (core.clj:5339)

What is going wrong??

my project.clj is:

(defproject hello "compojure"
  :description "JSON/Database tests"
  :url "http://localhost:3000/"
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [compojure "1.1.6"]
                 [ring/ring-json "0.2.0"]
                 [korma "0.3.0-RC6"]
                 [log4j "1.2.15" :exclusions [javax.mail/mail javax.jms/jms com.sun.jdmk/jmxtools com.sun.jmx/jmxri]]
                 [mysql/mysql-connector-java "5.1.6"]
                 [org.clojure/java.jdbc "0.3.0-alpha1"]
                 [c3p0/c3p0 "0.9.1.2"]
                 [hiccup "1.0.4"]
                 ]
  :plugins [[lein-ring "0.8.10"]]
  :ring {:handler hello.handler/app}
  :profiles
  {:dev {:dependencies [[ring-mock "0.1.5"]]}})

by Spyros at July 27, 2014 05:04 PM

TheoryOverflow

Type theory for memory safe data structures

Data structures such as a doubly linked list and a B+ tree have blocks of memory that have multiple pointers to it. This creates the risk that a bug will allow memory to be accessed after being freed.

I have heard of ideas based around linear typing for guaranteeing memory safety in the case of a single pointer to the memory. I believe the Rust language is based around this.

But suppose, for example, I wanted a memory safe implementation of a B+ tree in a systems language that has no garbage collection or reference counting. What exists in type theory that could be used to guarantee that the B+ tree implementation is memory safe?

by user782220 at July 27, 2014 04:59 PM

CompsciOverflow

Smarter recursion to compute #tilings of $m \times n$ board with small shapes that fit in $2 \times 2$ square?

This is a generalization of another question I posted because I wasn't clear that I cared about more than $2 \times 1$ dominoes (it's just a special case), and there is an explicit tractable formula for $2 \times 1$ dominoes. I was wondering about how to computationally (e.g., with recursion) obtain the number of tilings of an $m \times n$ board with a given subset of the contiguous shapes that are subregions of a $2 \times 2$ square (the $1 \times 1$ square, $2 \times 1$ dominoes, the $2 \times 2$ square, and the L-shapes).

If $m \leq n$, then we can use recursion on $n$ by keeping track of which squares are filled in the $n$th column, which gives $2^m$ cases to keep track of for each value of $n$, with the cases related between $n-1$ and $n$ based on how the empty spaces in the $(n-1)$th column are filled with the various shapes.

I was wondering, are there any significant improvements possible for this type of recursion? For example can we take advantage of some symmetries somehow? If improvements are possible, can it change the asymptotics for the computational time? For the current recursion, if the coefficients expressing cases for the $n$th column in terms of cases for the $(n-1)$th column are precomputed, then the running time is $O(nM)$ where $M = O(2^{2m})$ is the number of non-zero coefficients. If $n \gg m$ then successive matrix squaring can be used to get the running time to $O(2^{3m} \log n)$.

by user2566092 at July 27, 2014 04:58 PM

StackOverflow

Slick 2.1.0-RC1, Scala 2.11.x, bypassing 22 arity limit with case class (heterogenous)

I am having issues in mapping a Table that has > 22 columns specifically into a case class, assuming you have the following code

import scala.slick.driver.PostgresDriver.simple._
import scala.slick.collection.heterogenous._
import syntax._
import shapeless.Generic

case class TwentyThreeCaseClass(
    val id:Option[Long],
    val one:String,
    val two:String,
    val three:String,
    val four:String,
    val five:String,
    val six:String,
    val seven:String,
    val eight:String,
    val nine:String,
    val ten:String,
    val eleven:String,
    val twelve:String,
    val thirteen:String,
    val fourteen:String,
    val fifteen:String,
    val sixteen:String,
    val seventeen:String,
    val eighteen:String,
    val nineteen:String,
    val twenty:String,
    val twentyOne:String,
    val twentyTwo:String,
    val twentyThree:String,
    val twentyFour:String
)

class TwentyThreeTable(tag:Tag) extends Table[TwentyThreeCaseClass](tag,"twenty_three_table") {
    def id = column[Long]("id",O.PrimaryKey,O.AutoInc)
    def one = column[String]("one")
    def two = column[String]("two")
    def three = column[String]("three")
    def four = column[String]("four")
    def five = column[String]("five")
    def six = column[String]("six")
    def seven = column[String]("seven")
    def eight = column[String]("eight")
    def nine = column[String]("nine")
    def ten = column[String]("ten")
    def eleven = column[String]("eleven")
    def twelve = column[String]("twelve")
    def thirteen = column[String]("thirteen")
    def fourteen = column[String]("fourteen")
    def fifteen = column[String]("fifteen")
    def sixteen = column[String]("sixteen")
    def seventeen = column[String]("seventeen")
    def eighteen = column[String]("eighteen")
    def nineteen = column[String]("nineteen")
    def twenty = column[String]("twenty")
    def twentyOne = column[String]("twentyOne")
    def twentyTwo = column[String]("twentyTwo")
    def twentyThree = column[String]("twentyThree")
    def twentyFour = column[String]("twentyFour")

    private def iso[L <: HList, M <: HList](l: L)
                                 (implicit iso: Generic.Aux[TwentyThreeCaseClass, M], eq: L =:= M): TwentyThreeCaseClass = iso.from(l)

    def * =
        id.? ::
        one ::
        two ::
        three ::
        four ::
        five ::
        six ::
        seven ::
        eight ::
        nine ::
        ten ::
        eleven ::
        twelve ::
        thirteen ::
        fourteen ::
        fifteen ::
        sixteen ::
        seventeen ::
        eighteen ::
        nineteen ::
        twenty ::
        twentyOne ::
        twentyTwo ::
        twentyThree ::
        twentyFour ::
        HNil
        // Do stuff here to map to a case class

}

How exactly would you go about constructing/extracting the table into the TwentyThreeCaseClass? Example code is given on how to make a Slick Table map to an HList, but code is not given on how to map a Table to a case class with > 22 parameters via an HList (you can't use tuples, because the arity limit in Scala still applies to tuples: you can't make a tuple with more than twenty-two elements).

The iso is there because we use this generic iso code to map from an HList to a case class with the same shape in our shapeless code outside of Slick, so theoretically speaking you should be able to use iso to construct the case class from the HList; I just have no idea how to use iso in the context of Slick shapes.

EDIT: There is the same question asked on the slick github as an issue here https://github.com/slick/slick/issues/519#issuecomment-48327043

by mdedetrich at July 27, 2014 04:12 PM